Mar 19 09:16:30.334907 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 19 09:16:31.023801 master-0 kubenswrapper[4035]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:16:31.023801 master-0 kubenswrapper[4035]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 19 09:16:31.023801 master-0 kubenswrapper[4035]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:16:31.025193 master-0 kubenswrapper[4035]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:16:31.025193 master-0 kubenswrapper[4035]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 09:16:31.025193 master-0 kubenswrapper[4035]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
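The deprecation warnings above all point at the kubelet config file. As a minimal sketch (not the node's actual /etc/kubernetes/kubelet.conf), the flagged options map onto `kubelet.config.k8s.io/v1beta1` KubeletConfiguration fields roughly like this, using the values visible in the FLAG dump further down:

```yaml
# Illustrative only: the deprecated CLI flags from the warnings above,
# expressed as their KubeletConfiguration equivalents. Values are taken
# from the FLAG dump later in this log; the real config on the node will
# contain many more settings.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock          # was --container-runtime-endpoint
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # was --volume-plugin-dir
registerWithTaints:                                        # was --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                            # was --system-reserved
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
```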
Mar 19 09:16:31.025629 master-0 kubenswrapper[4035]: I0319 09:16:31.025425 4035 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030363 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030384 4035 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030391 4035 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030396 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030402 4035 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030408 4035 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030413 4035 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030419 4035 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030424 4035 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030430 4035 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030436 4035 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030441 4035 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030446 4035 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030451 4035 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030456 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030461 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030466 4035 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030471 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030476 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 09:16:31.030464 master-0 kubenswrapper[4035]: W0319 09:16:31.030483 4035 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030489 4035 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030495 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030502 4035 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030520 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030527 4035 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030535 4035 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030558 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030565 4035 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030570 4035 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030576 4035 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030582 4035 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030587 4035 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030592 4035 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030598 4035 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030603 4035 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030609 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030614 4035 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030620 4035 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 09:16:31.031680 master-0 kubenswrapper[4035]: W0319 09:16:31.030625 4035 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030630 4035 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030635 4035 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030640 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030653 4035 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030659 4035 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030668 4035 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030674 4035 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030681 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030686 4035 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030691 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030697 4035 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030702 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030708 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030713 4035 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030719 4035 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030727 4035 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030733 4035 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 09:16:31.032807 master-0 kubenswrapper[4035]: W0319 09:16:31.030740 4035 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030746 4035 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030751 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030757 4035 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030763 4035 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030768 4035 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030774 4035 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030779 4035 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030784 4035 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030789 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030796 4035 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030801 4035 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030806 4035 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030811 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030816 4035 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: W0319 09:16:31.030823 4035 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: I0319 09:16:31.031866 4035 flags.go:64] FLAG: --address="0.0.0.0"
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: I0319 09:16:31.031897 4035 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: I0319 09:16:31.031913 4035 flags.go:64] FLAG: --anonymous-auth="true"
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: I0319 09:16:31.031924 4035 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: I0319 09:16:31.031937 4035 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: I0319 09:16:31.031948 4035 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 19 09:16:31.033696 master-0 kubenswrapper[4035]: I0319 09:16:31.031960 4035 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.031971 4035 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.031980 4035 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.031990 4035 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032000 4035 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032011 4035 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032022 4035 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032032 4035 flags.go:64] FLAG: --cgroup-root=""
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032041 4035 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032050 4035 flags.go:64] FLAG: --client-ca-file=""
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032060 4035 flags.go:64] FLAG: --cloud-config=""
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032069 4035 flags.go:64] FLAG: --cloud-provider=""
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032078 4035 flags.go:64] FLAG: --cluster-dns="[]"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032091 4035 flags.go:64] FLAG: --cluster-domain=""
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032099 4035 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032110 4035 flags.go:64] FLAG: --config-dir=""
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032119 4035 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032128 4035 flags.go:64] FLAG: --container-log-max-files="5"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032140 4035 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032149 4035 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032173 4035 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032183 4035 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032274 4035 flags.go:64] FLAG: --contention-profiling="false"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032285 4035 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032295 4035 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 19 09:16:31.034656 master-0 kubenswrapper[4035]: I0319 09:16:31.032305 4035 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032314 4035 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032326 4035 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032335 4035 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032344 4035 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032368 4035 flags.go:64] FLAG: --enable-load-reader="false"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032378 4035 flags.go:64] FLAG: --enable-server="true"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032387 4035 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032399 4035 flags.go:64] FLAG: --event-burst="100"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032408 4035 flags.go:64] FLAG: --event-qps="50"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032436 4035 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032448 4035 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032457 4035 flags.go:64] FLAG: --eviction-hard=""
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032469 4035 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032478 4035 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032487 4035 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032497 4035 flags.go:64] FLAG: --eviction-soft=""
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032506 4035 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032515 4035 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032524 4035 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032533 4035 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032568 4035 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032577 4035 flags.go:64] FLAG: --fail-swap-on="true"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032586 4035 flags.go:64] FLAG: --feature-gates=""
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032597 4035 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 19 09:16:31.035730 master-0 kubenswrapper[4035]: I0319 09:16:31.032606 4035 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032616 4035 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032625 4035 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032634 4035 flags.go:64] FLAG: --healthz-port="10248"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032645 4035 flags.go:64] FLAG: --help="false"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032654 4035 flags.go:64] FLAG: --hostname-override=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032663 4035 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032673 4035 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032682 4035 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032692 4035 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032702 4035 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032711 4035 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032720 4035 flags.go:64] FLAG: --image-service-endpoint=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032729 4035 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032737 4035 flags.go:64] FLAG: --kube-api-burst="100"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032746 4035 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032756 4035 flags.go:64] FLAG: --kube-api-qps="50"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032765 4035 flags.go:64] FLAG: --kube-reserved=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032774 4035 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032783 4035 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032793 4035 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032803 4035 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032815 4035 flags.go:64] FLAG: --lock-file=""
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032824 4035 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032833 4035 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 19 09:16:31.036943 master-0 kubenswrapper[4035]: I0319 09:16:31.032843 4035 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032857 4035 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032866 4035 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032875 4035 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032885 4035 flags.go:64] FLAG: --logging-format="text"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032894 4035 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032904 4035 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032913 4035 flags.go:64] FLAG: --manifest-url=""
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032922 4035 flags.go:64] FLAG: --manifest-url-header=""
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032934 4035 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032943 4035 flags.go:64] FLAG: --max-open-files="1000000"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032955 4035 flags.go:64] FLAG: --max-pods="110"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032964 4035 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032974 4035 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032983 4035 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.032992 4035 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033001 4035 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033011 4035 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033020 4035 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033040 4035 flags.go:64] FLAG: --node-status-max-images="50"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033049 4035 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033058 4035 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033067 4035 flags.go:64] FLAG: --pod-cidr=""
Mar 19 09:16:31.038033 master-0 kubenswrapper[4035]: I0319 09:16:31.033076 4035 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033091 4035 flags.go:64] FLAG: --pod-manifest-path=""
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033100 4035 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033109 4035 flags.go:64] FLAG: --pods-per-core="0"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033118 4035 flags.go:64] FLAG: --port="10250"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033128 4035 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033137 4035 flags.go:64] FLAG: --provider-id=""
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033146 4035 flags.go:64] FLAG: --qos-reserved=""
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033155 4035 flags.go:64] FLAG: --read-only-port="10255"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033165 4035 flags.go:64] FLAG: --register-node="true"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033177 4035 flags.go:64] FLAG: --register-schedulable="true"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033186 4035 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033201 4035 flags.go:64] FLAG: --registry-burst="10"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033210 4035 flags.go:64] FLAG: --registry-qps="5"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033219 4035 flags.go:64] FLAG: --reserved-cpus=""
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033228 4035 flags.go:64] FLAG: --reserved-memory=""
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033240 4035 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033249 4035 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033258 4035 flags.go:64] FLAG: --rotate-certificates="false"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033267 4035 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033276 4035 flags.go:64] FLAG: --runonce="false"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033286 4035 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033299 4035 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033309 4035 flags.go:64] FLAG: --seccomp-default="false"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033318 4035 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033327 4035 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 19 09:16:31.039078 master-0 kubenswrapper[4035]: I0319 09:16:31.033338 4035 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033347 4035 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033356 4035 flags.go:64] FLAG: --storage-driver-password="root"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033365 4035 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033375 4035 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033384 4035 flags.go:64] FLAG: --storage-driver-user="root"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033393 4035 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033403 4035 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033412 4035 flags.go:64] FLAG: --system-cgroups=""
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033421 4035 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033435 4035 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033444 4035 flags.go:64] FLAG: --tls-cert-file=""
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033454 4035 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033466 4035 flags.go:64] FLAG: --tls-min-version=""
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033475 4035 flags.go:64] FLAG: --tls-private-key-file=""
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033484 4035 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033493 4035 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033503 4035 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033512 4035 flags.go:64] FLAG: --v="2"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033524 4035 flags.go:64] FLAG: --version="false"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033536 4035 flags.go:64] FLAG: --vmodule=""
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033571 4035 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: I0319 09:16:31.033581 4035 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: W0319 09:16:31.033814 4035 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 09:16:31.040295 master-0 kubenswrapper[4035]: W0319 09:16:31.033826 4035 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033860 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033871 4035 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033880 4035 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033896 4035 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033904 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033931 4035 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033940 4035 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033950 4035 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033958 4035 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033967 4035 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033975 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033986 4035 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.033997 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.034006 4035 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.034016 4035 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.034026 4035 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.034035 4035 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.034044 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:16:31.041506 master-0 kubenswrapper[4035]: W0319 09:16:31.034053 4035 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034062 4035 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034071 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034079 4035 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034088 4035 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034097 4035 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034105 4035 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034113 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034121 4035 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034132 4035 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034141 4035 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034150 4035 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034158 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034167 4035 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034175 4035 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034183 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034192 4035 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034202 4035 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034211 4035 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:16:31.042494 master-0 kubenswrapper[4035]: W0319 09:16:31.034218 4035 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034227 4035 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034234 4035 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034242 4035 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 19 
09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034250 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034258 4035 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034267 4035 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034275 4035 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034282 4035 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034290 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034298 4035 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034306 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034313 4035 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034324 4035 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034339 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034348 4035 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034358 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034366 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034375 4035 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 09:16:31.043624 master-0 kubenswrapper[4035]: W0319 09:16:31.034383 4035 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034391 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034399 4035 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034407 4035 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034415 4035 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034423 4035 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034432 4035 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034440 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034448 4035 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034455 4035 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034463 4035 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034528 4035 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034567 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: W0319 09:16:31.034580 4035 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:16:31.044919 master-0 kubenswrapper[4035]: I0319 09:16:31.034597 4035 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:16:31.046493 master-0 kubenswrapper[4035]: I0319 09:16:31.046418 4035 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 19 09:16:31.046493 master-0 kubenswrapper[4035]: I0319 09:16:31.046475 4035 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 09:16:31.046712 master-0 kubenswrapper[4035]: W0319 09:16:31.046677 4035 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 09:16:31.046712 master-0 kubenswrapper[4035]: W0319 09:16:31.046699 4035 feature_gate.go:330] unrecognized feature 
gate: MachineAPIProviderOpenStack Mar 19 09:16:31.046712 master-0 kubenswrapper[4035]: W0319 09:16:31.046711 4035 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046723 4035 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046734 4035 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046744 4035 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046754 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046765 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046775 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046785 4035 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046796 4035 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046811 4035 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046828 4035 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046841 4035 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046852 4035 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046863 4035 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046873 4035 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046883 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046893 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046904 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046915 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 09:16:31.046909 master-0 kubenswrapper[4035]: W0319 09:16:31.046925 4035 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.046935 4035 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.046945 4035 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.046955 4035 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: 
W0319 09:16:31.046965 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.046975 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.046985 4035 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.046995 4035 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047006 4035 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047015 4035 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047029 4035 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047040 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047050 4035 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047064 4035 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047077 4035 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047090 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047100 4035 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047111 4035 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047123 4035 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 09:16:31.048144 master-0 kubenswrapper[4035]: W0319 09:16:31.047134 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047146 4035 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047157 4035 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047167 4035 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047179 4035 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047190 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047201 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047212 4035 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047224 4035 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 
09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047234 4035 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047245 4035 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047254 4035 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047264 4035 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047274 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047284 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047297 4035 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047307 4035 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047317 4035 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047327 4035 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047337 4035 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:16:31.049727 master-0 kubenswrapper[4035]: W0319 09:16:31.047347 4035 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047357 4035 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047367 4035 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047377 4035 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047387 4035 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047397 4035 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047407 4035 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047421 4035 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047434 4035 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047447 4035 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047460 4035 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047472 4035 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: I0319 09:16:31.047489 4035 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false 
UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047847 4035 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047872 4035 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:16:31.051042 master-0 kubenswrapper[4035]: W0319 09:16:31.047884 4035 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047897 4035 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047908 4035 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047918 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047929 4035 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047940 4035 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047953 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047963 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047974 4035 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047984 4035 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.047995 4035 feature_gate.go:330] unrecognized feature 
gate: GCPLabelsTags Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048006 4035 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048016 4035 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048027 4035 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048037 4035 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048047 4035 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048058 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048072 4035 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048087 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048099 4035 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:16:31.052089 master-0 kubenswrapper[4035]: W0319 09:16:31.048110 4035 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048124 4035 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048138 4035 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048149 4035 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048160 4035 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048170 4035 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048181 4035 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048192 4035 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048203 4035 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048213 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048255 4035 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048270 4035 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048282 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048294 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048304 4035 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource 
Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048315 4035 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048328 4035 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048341 4035 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048351 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 09:16:31.053434 master-0 kubenswrapper[4035]: W0319 09:16:31.048362 4035 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048372 4035 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048382 4035 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048392 4035 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048403 4035 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048413 4035 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048424 4035 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048433 4035 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048444 4035 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048454 4035 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048465 4035 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048475 4035 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048485 4035 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048498 4035 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048512 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048525 4035 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048538 4035 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048581 4035 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048592 4035 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048603 4035 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 09:16:31.054927 master-0 kubenswrapper[4035]: W0319 09:16:31.048613 4035 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048623 4035 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048634 4035 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048644 4035 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048656 4035 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048666 4035 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048676 4035 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048686 4035 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048698 4035 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048708 4035 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: W0319 09:16:31.048718 4035 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: I0319 09:16:31.048735 4035 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 19 09:16:31.056359 master-0 kubenswrapper[4035]: I0319 09:16:31.050146 4035 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 19 09:16:31.057171 master-0 kubenswrapper[4035]: I0319 09:16:31.056466 4035 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 19 09:16:31.059136 master-0 kubenswrapper[4035]: I0319 09:16:31.059084 4035 server.go:997] "Starting client certificate rotation"
Mar 19 09:16:31.059136 master-0 kubenswrapper[4035]: I0319 09:16:31.059132 4035 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 19 09:16:31.059413 master-0 kubenswrapper[4035]: I0319 09:16:31.059359 4035 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 19 09:16:31.089210 master-0 kubenswrapper[4035]: I0319 09:16:31.089134 4035 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 09:16:31.092718 master-0 kubenswrapper[4035]: I0319 09:16:31.092655 4035 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 09:16:31.097006 master-0 kubenswrapper[4035]: E0319 09:16:31.096901 4035 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 19 09:16:31.112423 master-0 kubenswrapper[4035]: I0319 09:16:31.112345 4035 log.go:25] "Validated CRI v1 runtime API"
Mar 19 09:16:31.121529 master-0 kubenswrapper[4035]: I0319 09:16:31.121462 4035 log.go:25] "Validated CRI v1 image API"
Mar 19 09:16:31.125377 master-0 kubenswrapper[4035]: I0319 09:16:31.125295 4035 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 19 09:16:31.129338 master-0 kubenswrapper[4035]: I0319 09:16:31.129254 4035 fs.go:135] Filesystem UUIDs: map[433c3f11-76c1-4144-a2fc-7b9790746712:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 19 09:16:31.129338 master-0 kubenswrapper[4035]: I0319 09:16:31.129307 4035 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 19 09:16:31.158307 master-0 kubenswrapper[4035]: I0319 09:16:31.157845 4035 manager.go:217] Machine: {Timestamp:2026-03-19 09:16:31.155250571 +0000 UTC m=+0.613865542 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:3e104eb08e5948b08517e4448d4a842b SystemUUID:3e104eb0-8e59-48b0-8517-e4448d4a842b BootID:5d651922-4f48-42db-81f8-e0fd55710ee7 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:33:06:4c Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:34:d3:c4 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:7e:e2:1c:fe:3d:73 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 19 09:16:31.158307 master-0 kubenswrapper[4035]: I0319 09:16:31.158236 4035 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 19 09:16:31.158585 master-0 kubenswrapper[4035]: I0319 09:16:31.158458 4035 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 19 09:16:31.160119 master-0 kubenswrapper[4035]: I0319 09:16:31.160074 4035 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 19 09:16:31.160528 master-0 kubenswrapper[4035]: I0319 09:16:31.160456 4035 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 19 09:16:31.160976 master-0 kubenswrapper[4035]: I0319 09:16:31.160517 4035 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 09:16:31.161062 master-0 kubenswrapper[4035]: I0319 09:16:31.161005 4035 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 09:16:31.161062 master-0 kubenswrapper[4035]: I0319 09:16:31.161029 4035 container_manager_linux.go:303] "Creating device plugin manager"
Mar 19 09:16:31.161062 master-0 kubenswrapper[4035]: I0319 09:16:31.161049 4035 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 19 09:16:31.161273 master-0 kubenswrapper[4035]: I0319 09:16:31.161092 4035 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 19 09:16:31.161380 master-0 kubenswrapper[4035]: I0319 09:16:31.161335 4035 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 09:16:31.161536 master-0 kubenswrapper[4035]: I0319 09:16:31.161496 4035 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 19 09:16:31.165439 master-0 kubenswrapper[4035]: I0319 09:16:31.165328 4035 kubelet.go:418] "Attempting to sync node with API server"
Mar 19 09:16:31.165439 master-0 kubenswrapper[4035]: I0319 09:16:31.165367 4035 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 09:16:31.165439 master-0 kubenswrapper[4035]: I0319 09:16:31.165446 4035 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 19 09:16:31.165699 master-0 kubenswrapper[4035]: I0319 09:16:31.165470 4035 kubelet.go:324] "Adding apiserver pod source"
Mar 19 09:16:31.165699 master-0 kubenswrapper[4035]: I0319 09:16:31.165504 4035 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 09:16:31.172003 master-0 kubenswrapper[4035]: W0319 09:16:31.171904 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 19 09:16:31.172132 master-0 kubenswrapper[4035]: I0319 09:16:31.171999 4035 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 19 09:16:31.172445 master-0 kubenswrapper[4035]: W0319 09:16:31.171985 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 19 09:16:31.172599 master-0 kubenswrapper[4035]: E0319 09:16:31.172023 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 19 09:16:31.172599 master-0 kubenswrapper[4035]: E0319 09:16:31.172530 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 19 09:16:31.174482 master-0 kubenswrapper[4035]: I0319 09:16:31.174426 4035 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 09:16:31.174913 master-0 kubenswrapper[4035]: I0319 09:16:31.174860 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 19 09:16:31.174913 master-0 kubenswrapper[4035]: I0319 09:16:31.174908 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.174927 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.174947 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.174965 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.174982 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.175000 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.175016 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.175061 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.175081 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.175106 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 19 09:16:31.175122 master-0 kubenswrapper[4035]: I0319 09:16:31.175135 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 19 09:16:31.178754 master-0 kubenswrapper[4035]: I0319 09:16:31.178703 4035 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 19 09:16:31.179593 master-0 kubenswrapper[4035]: I0319 09:16:31.179507 4035 server.go:1280] "Started kubelet"
Mar 19 09:16:31.181261 master-0 kubenswrapper[4035]: I0319 09:16:31.181171 4035 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 09:16:31.181499 master-0 kubenswrapper[4035]: I0319 09:16:31.181144 4035 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 09:16:31.181455 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 19 09:16:31.181918 master-0 kubenswrapper[4035]: I0319 09:16:31.181534 4035 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 19 09:16:31.189692 master-0 kubenswrapper[4035]: I0319 09:16:31.189263 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 19 09:16:31.192094 master-0 kubenswrapper[4035]: I0319 09:16:31.190753 4035 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 09:16:31.192094 master-0 kubenswrapper[4035]: I0319 09:16:31.190975 4035 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 19 09:16:31.192094 master-0 kubenswrapper[4035]: I0319 09:16:31.191022 4035 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 09:16:31.192605 master-0 kubenswrapper[4035]: E0319 09:16:31.192134 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:16:31.192605 master-0 kubenswrapper[4035]: I0319 09:16:31.192158 4035 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 19 09:16:31.192605 master-0 kubenswrapper[4035]: I0319 09:16:31.192204 4035 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 19 09:16:31.192605 master-0 kubenswrapper[4035]: I0319 09:16:31.192309 4035 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 19 09:16:31.198577 master-0 kubenswrapper[4035]: I0319 09:16:31.198209 4035 factory.go:55] Registering systemd factory
Mar 19 09:16:31.198577 master-0 kubenswrapper[4035]: I0319 09:16:31.198238 4035 factory.go:221] Registration of the systemd container factory successfully
Mar 19 09:16:31.198577 master-0 kubenswrapper[4035]: I0319 09:16:31.198522 4035 reconstruct.go:97] "Volume reconstruction finished"
Mar 19 09:16:31.198577 master-0 kubenswrapper[4035]: I0319 09:16:31.198534 4035 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 09:16:31.198949 master-0 kubenswrapper[4035]: I0319 09:16:31.198728 4035 server.go:449] "Adding debug handlers to kubelet server"
Mar 19 09:16:31.199312 master-0 kubenswrapper[4035]: I0319 09:16:31.199238 4035 factory.go:153] Registering CRI-O factory
Mar 19 09:16:31.199312 master-0 kubenswrapper[4035]: I0319 09:16:31.199275 4035 factory.go:221] Registration of the crio container factory successfully
Mar 19 09:16:31.199503 master-0 kubenswrapper[4035]: I0319 09:16:31.199367 4035 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 19 09:16:31.199503 master-0 kubenswrapper[4035]: I0319 09:16:31.199395 4035 factory.go:103] Registering Raw factory
Mar 19 09:16:31.199503 master-0 kubenswrapper[4035]: I0319 09:16:31.199412 4035 manager.go:1196] Started watching for new ooms in manager
Mar 19 09:16:31.199503 master-0 kubenswrapper[4035]: W0319 09:16:31.199422 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 19 09:16:31.199503 master-0 kubenswrapper[4035]: E0319 09:16:31.199497 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 19 09:16:31.199894 master-0 kubenswrapper[4035]: E0319 09:16:31.199585 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 19 09:16:31.200144 master-0 kubenswrapper[4035]: I0319 09:16:31.200091 4035 manager.go:319] Starting recovery of all containers
Mar 19 09:16:31.200950 master-0 kubenswrapper[4035]: E0319 09:16:31.199399 4035 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e33602020e959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.179458905 +0000 UTC m=+0.638073876,LastTimestamp:2026-03-19 09:16:31.179458905 +0000 UTC m=+0.638073876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 09:16:31.214688 master-0 kubenswrapper[4035]: E0319 09:16:31.214614 4035 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 19 09:16:31.218340 master-0 kubenswrapper[4035]: I0319 09:16:31.218290 4035 manager.go:324] Recovery completed
Mar 19 09:16:31.230019 master-0 kubenswrapper[4035]: I0319 09:16:31.229980 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.232019 master-0 kubenswrapper[4035]: I0319 09:16:31.231953 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:31.232101 master-0 kubenswrapper[4035]: I0319 09:16:31.232051 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:31.232101 master-0 kubenswrapper[4035]: I0319 09:16:31.232082 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:31.233725 master-0 kubenswrapper[4035]: I0319 09:16:31.233689 4035 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 19 09:16:31.233725 master-0 kubenswrapper[4035]: I0319 09:16:31.233710 4035 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 19 09:16:31.233725 master-0 kubenswrapper[4035]: I0319 09:16:31.233730 4035 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 09:16:31.238203 master-0 kubenswrapper[4035]: I0319 09:16:31.238159 4035 policy_none.go:49] "None policy: Start"
Mar 19 09:16:31.239210 master-0 kubenswrapper[4035]: I0319 09:16:31.239177 4035 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 19 09:16:31.239210 master-0 kubenswrapper[4035]: I0319 09:16:31.239206 4035 state_mem.go:35] "Initializing new in-memory state store"
Mar 19 09:16:31.292598 master-0 kubenswrapper[4035]: E0319 09:16:31.292536 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:16:31.295559 master-0 kubenswrapper[4035]: I0319 09:16:31.295498 4035 manager.go:334] "Starting Device Plugin manager"
Mar 19 09:16:31.295724 master-0 kubenswrapper[4035]: I0319 09:16:31.295690 4035 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 19 09:16:31.295724 master-0 kubenswrapper[4035]: I0319 09:16:31.295714 4035 server.go:79] "Starting device plugin registration server"
Mar 19 09:16:31.296120 master-0 kubenswrapper[4035]: I0319 09:16:31.296089 4035 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 19 09:16:31.296192 master-0 kubenswrapper[4035]: I0319 09:16:31.296107 4035 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 19 09:16:31.296275 master-0 kubenswrapper[4035]: I0319 09:16:31.296254 4035 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 19 09:16:31.296352 master-0 kubenswrapper[4035]: I0319 09:16:31.296331 4035 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 19 09:16:31.296352 master-0 kubenswrapper[4035]: I0319 09:16:31.296345 4035 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 19 09:16:31.297872 master-0 kubenswrapper[4035]: E0319 09:16:31.297833 4035 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 19 09:16:31.328006 master-0 kubenswrapper[4035]: I0319 09:16:31.327915 4035 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 09:16:31.343946 master-0 kubenswrapper[4035]: I0319 09:16:31.330193 4035 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 09:16:31.343946 master-0 kubenswrapper[4035]: I0319 09:16:31.333151 4035 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 19 09:16:31.343946 master-0 kubenswrapper[4035]: I0319 09:16:31.333208 4035 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 19 09:16:31.343946 master-0 kubenswrapper[4035]: E0319 09:16:31.333285 4035 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 19 09:16:31.343946 master-0 kubenswrapper[4035]: W0319 09:16:31.335921 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 19 09:16:31.343946 master-0 kubenswrapper[4035]: E0319 09:16:31.335998 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 19 09:16:31.396294 master-0 kubenswrapper[4035]: I0319 09:16:31.396230 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.397974 master-0 kubenswrapper[4035]: I0319 09:16:31.397919 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:31.397974 master-0 kubenswrapper[4035]: I0319 09:16:31.397967 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:31.398166 master-0 kubenswrapper[4035]: I0319 09:16:31.397982 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:31.398166 master-0 kubenswrapper[4035]: I0319 09:16:31.398020 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 19 09:16:31.398956 master-0 kubenswrapper[4035]: E0319 09:16:31.398890 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 19 09:16:31.400800 master-0 kubenswrapper[4035]: E0319 09:16:31.400727 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 19 09:16:31.433983 master-0 kubenswrapper[4035]: I0319 09:16:31.433892 4035 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 19 09:16:31.434147 master-0 kubenswrapper[4035]: I0319 09:16:31.434002 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.435133 master-0 kubenswrapper[4035]: I0319 09:16:31.435089 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:31.435133 master-0 kubenswrapper[4035]: I0319 09:16:31.435128 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:31.435250 master-0 kubenswrapper[4035]: I0319 09:16:31.435141 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:31.435296 master-0 kubenswrapper[4035]: I0319 09:16:31.435249 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.435564 master-0 kubenswrapper[4035]: I0319 09:16:31.435509 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 09:16:31.435649 master-0 kubenswrapper[4035]: I0319 09:16:31.435608 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.436112 master-0 kubenswrapper[4035]: I0319 09:16:31.436078 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:31.436112 master-0 kubenswrapper[4035]: I0319 09:16:31.436109 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:31.436232 master-0 kubenswrapper[4035]: I0319 09:16:31.436117 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:31.436232 master-0 kubenswrapper[4035]: I0319 09:16:31.436215 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.436414 master-0 kubenswrapper[4035]: I0319 09:16:31.436306 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:31.436414 master-0 kubenswrapper[4035]: I0319 09:16:31.436364 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:31.436414 master-0 kubenswrapper[4035]: I0319 09:16:31.436388 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:31.437006 master-0 kubenswrapper[4035]: I0319 09:16:31.436975 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 09:16:31.437083 master-0 kubenswrapper[4035]: I0319 09:16:31.437019 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.437270 master-0 kubenswrapper[4035]: I0319 09:16:31.437230 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:31.437328 master-0 kubenswrapper[4035]: I0319 09:16:31.437275 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:31.437328 master-0 kubenswrapper[4035]: I0319 09:16:31.437291 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:31.437416 master-0 kubenswrapper[4035]: I0319 09:16:31.437398 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.437590 master-0 kubenswrapper[4035]: I0319 09:16:31.437520 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:16:31.437679 master-0 kubenswrapper[4035]: I0319 09:16:31.437616 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.438103 master-0 kubenswrapper[4035]: I0319 09:16:31.438051 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:31.438103 master-0 kubenswrapper[4035]: I0319 09:16:31.438072 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:31.438103 master-0 kubenswrapper[4035]: I0319 09:16:31.438084 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:31.438236 master-0 kubenswrapper[4035]: I0319 09:16:31.438161 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:31.438514 master-0 kubenswrapper[4035]: I0319 09:16:31.438277 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.438514 master-0 kubenswrapper[4035]: I0319 09:16:31.438321 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:31.438731 master-0 kubenswrapper[4035]: I0319 09:16:31.438515 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:31.438731 master-0 kubenswrapper[4035]: I0319 09:16:31.438540 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:31.438731 master-0 kubenswrapper[4035]: I0319 09:16:31.438563 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:31.438731 master-0 kubenswrapper[4035]: I0319 09:16:31.438693 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:31.438731 master-0 kubenswrapper[4035]: I0319 09:16:31.438728 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:31.438906 master-0 kubenswrapper[4035]: I0319 09:16:31.438744 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:31.438906 master-0 kubenswrapper[4035]: I0319 09:16:31.438766 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:31.438906 master-0 kubenswrapper[4035]: I0319 09:16:31.438750 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:31.439019 master-0 kubenswrapper[4035]: I0319 09:16:31.438927 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:16:31.439019 master-0 kubenswrapper[4035]: I0319 09:16:31.438948 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:31.439318 master-0 kubenswrapper[4035]: I0319 09:16:31.439220 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:31.439471 master-0 kubenswrapper[4035]: I0319 09:16:31.439432 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:31.439471 master-0 kubenswrapper[4035]: I0319 09:16:31.439469 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:31.439764 master-0 kubenswrapper[4035]: I0319 09:16:31.439481 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:31.440964 master-0 kubenswrapper[4035]: I0319 09:16:31.440261 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:31.440964 master-0 kubenswrapper[4035]: I0319 09:16:31.440302 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:31.440964 master-0 kubenswrapper[4035]: I0319 09:16:31.440312 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:31.499771 master-0 kubenswrapper[4035]: I0319 09:16:31.499701 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.499936 master-0 kubenswrapper[4035]: I0319 09:16:31.499780 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:16:31.499936 master-0 kubenswrapper[4035]: I0319 09:16:31.499826 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.499936 master-0 kubenswrapper[4035]: I0319 09:16:31.499895 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.500226 master-0 kubenswrapper[4035]: I0319 09:16:31.499936 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.500226 master-0 kubenswrapper[4035]: I0319 09:16:31.499979 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.500226 master-0 kubenswrapper[4035]: I0319 09:16:31.500081 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.500642 master-0 kubenswrapper[4035]: I0319 09:16:31.500246 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:16:31.501041 master-0 kubenswrapper[4035]: I0319 09:16:31.500978 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:16:31.501123 master-0 kubenswrapper[4035]: I0319 09:16:31.501042 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.501123 master-0 
kubenswrapper[4035]: I0319 09:16:31.501076 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:16:31.501123 master-0 kubenswrapper[4035]: I0319 09:16:31.501109 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:16:31.501371 master-0 kubenswrapper[4035]: I0319 09:16:31.501143 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.501371 master-0 kubenswrapper[4035]: I0319 09:16:31.501174 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.501371 master-0 kubenswrapper[4035]: I0319 09:16:31.501204 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " 
pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:16:31.501371 master-0 kubenswrapper[4035]: I0319 09:16:31.501234 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.501371 master-0 kubenswrapper[4035]: I0319 09:16:31.501264 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.600111 master-0 kubenswrapper[4035]: I0319 09:16:31.600002 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:31.601314 master-0 kubenswrapper[4035]: I0319 09:16:31.601236 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:31.601314 master-0 kubenswrapper[4035]: I0319 09:16:31.601284 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:31.601314 master-0 kubenswrapper[4035]: I0319 09:16:31.601302 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:31.601604 master-0 kubenswrapper[4035]: I0319 09:16:31.601347 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:16:31.601604 master-0 kubenswrapper[4035]: I0319 09:16:31.601431 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.601604 master-0 kubenswrapper[4035]: I0319 09:16:31.601498 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601619 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601623 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601698 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601720 4035 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601719 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601743 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601765 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601789 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601790 4035 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601812 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:16:31.601825 master-0 kubenswrapper[4035]: I0319 09:16:31.601840 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601848 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601864 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 
09:16:31.601856 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601891 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601917 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601955 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601980 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:16:31.602698 master-0 
kubenswrapper[4035]: I0319 09:16:31.601978 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601998 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.602016 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.601882 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.602055 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 
09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.602108 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: E0319 09:16:31.602140 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.602159 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.602218 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.602698 master-0 kubenswrapper[4035]: I0319 09:16:31.602248 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.604000 master-0 kubenswrapper[4035]: I0319 09:16:31.602289 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.604000 master-0 kubenswrapper[4035]: I0319 09:16:31.602308 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:16:31.604000 master-0 kubenswrapper[4035]: I0319 09:16:31.602364 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.604000 master-0 kubenswrapper[4035]: I0319 09:16:31.602423 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.604000 master-0 kubenswrapper[4035]: I0319 09:16:31.602443 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:16:31.778989 master-0 kubenswrapper[4035]: I0319 09:16:31.778840 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:16:31.796874 master-0 kubenswrapper[4035]: I0319 09:16:31.796785 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:31.802270 master-0 kubenswrapper[4035]: E0319 09:16:31.802181 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 19 09:16:31.818687 master-0 kubenswrapper[4035]: I0319 09:16:31.818625 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:16:31.828838 master-0 kubenswrapper[4035]: I0319 09:16:31.828789 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:31.834723 master-0 kubenswrapper[4035]: I0319 09:16:31.834686 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:16:32.002824 master-0 kubenswrapper[4035]: I0319 09:16:32.002749 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:32.003756 master-0 kubenswrapper[4035]: I0319 09:16:32.003719 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:32.003756 master-0 kubenswrapper[4035]: I0319 09:16:32.003758 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:32.003888 master-0 kubenswrapper[4035]: I0319 09:16:32.003770 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:32.003888 master-0 kubenswrapper[4035]: I0319 09:16:32.003816 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:16:32.004715 master-0 kubenswrapper[4035]: E0319 09:16:32.004665 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 09:16:32.191852 master-0 kubenswrapper[4035]: I0319 09:16:32.191790 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:32.368876 master-0 kubenswrapper[4035]: W0319 09:16:32.368600 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8 WatchSource:0}: Error finding container 
c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8: Status 404 returned error can't find the container with id c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8 Mar 19 09:16:32.376084 master-0 kubenswrapper[4035]: I0319 09:16:32.375816 4035 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 09:16:32.380142 master-0 kubenswrapper[4035]: W0319 09:16:32.380085 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2 WatchSource:0}: Error finding container aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2: Status 404 returned error can't find the container with id aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2 Mar 19 09:16:32.382475 master-0 kubenswrapper[4035]: W0319 09:16:32.382365 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:32.382651 master-0 kubenswrapper[4035]: E0319 09:16:32.382478 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:32.432116 master-0 kubenswrapper[4035]: W0319 09:16:32.432044 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611 WatchSource:0}: 
Error finding container e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611: Status 404 returned error can't find the container with id e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611 Mar 19 09:16:32.448958 master-0 kubenswrapper[4035]: W0319 09:16:32.448890 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00 WatchSource:0}: Error finding container 9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00: Status 404 returned error can't find the container with id 9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00 Mar 19 09:16:32.469070 master-0 kubenswrapper[4035]: W0319 09:16:32.469004 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6 WatchSource:0}: Error finding container 28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6: Status 404 returned error can't find the container with id 28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6 Mar 19 09:16:32.540499 master-0 kubenswrapper[4035]: W0319 09:16:32.540397 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:32.540499 master-0 kubenswrapper[4035]: E0319 09:16:32.540478 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:32.603806 master-0 kubenswrapper[4035]: E0319 09:16:32.603700 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 19 09:16:32.661588 master-0 kubenswrapper[4035]: W0319 09:16:32.661178 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:32.661707 master-0 kubenswrapper[4035]: E0319 09:16:32.661604 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:32.680585 master-0 kubenswrapper[4035]: W0319 09:16:32.680448 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:32.680795 master-0 kubenswrapper[4035]: E0319 09:16:32.680598 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 
09:16:32.805376 master-0 kubenswrapper[4035]: I0319 09:16:32.805221 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:32.809774 master-0 kubenswrapper[4035]: I0319 09:16:32.809658 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:32.809774 master-0 kubenswrapper[4035]: I0319 09:16:32.809723 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:32.809774 master-0 kubenswrapper[4035]: I0319 09:16:32.809742 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:32.809774 master-0 kubenswrapper[4035]: I0319 09:16:32.809796 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:16:32.810922 master-0 kubenswrapper[4035]: E0319 09:16:32.810826 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 09:16:33.139846 master-0 kubenswrapper[4035]: I0319 09:16:33.139770 4035 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 09:16:33.141041 master-0 kubenswrapper[4035]: E0319 09:16:33.141003 4035 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:33.190480 master-0 kubenswrapper[4035]: I0319 09:16:33.190383 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:33.340463 master-0 kubenswrapper[4035]: I0319 09:16:33.340369 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6"} Mar 19 09:16:33.341393 master-0 kubenswrapper[4035]: I0319 09:16:33.341354 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00"} Mar 19 09:16:33.342227 master-0 kubenswrapper[4035]: I0319 09:16:33.342207 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611"} Mar 19 09:16:33.343177 master-0 kubenswrapper[4035]: I0319 09:16:33.343155 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2"} Mar 19 09:16:33.344727 master-0 kubenswrapper[4035]: I0319 09:16:33.344687 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8"} Mar 19 09:16:34.192392 master-0 kubenswrapper[4035]: I0319 09:16:34.192207 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:34.205191 master-0 kubenswrapper[4035]: E0319 09:16:34.205100 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 19 09:16:34.353476 master-0 kubenswrapper[4035]: I0319 09:16:34.353411 4035 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="b213f6d8da0d4384e45f89c17fb5962fd352a3cea0a7f3f8261c476ba746dbca" exitCode=0 Mar 19 09:16:34.353476 master-0 kubenswrapper[4035]: I0319 09:16:34.353466 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"b213f6d8da0d4384e45f89c17fb5962fd352a3cea0a7f3f8261c476ba746dbca"} Mar 19 09:16:34.354253 master-0 kubenswrapper[4035]: I0319 09:16:34.353582 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:34.354253 master-0 kubenswrapper[4035]: I0319 09:16:34.354195 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:34.354253 master-0 kubenswrapper[4035]: I0319 09:16:34.354216 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:34.354253 master-0 kubenswrapper[4035]: I0319 09:16:34.354226 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:34.411691 master-0 kubenswrapper[4035]: I0319 09:16:34.411587 4035 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Mar 19 09:16:34.412485 master-0 kubenswrapper[4035]: I0319 09:16:34.412447 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:34.412485 master-0 kubenswrapper[4035]: I0319 09:16:34.412471 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:34.412485 master-0 kubenswrapper[4035]: I0319 09:16:34.412479 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:34.412765 master-0 kubenswrapper[4035]: I0319 09:16:34.412512 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:16:34.413074 master-0 kubenswrapper[4035]: E0319 09:16:34.413036 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 09:16:34.552781 master-0 kubenswrapper[4035]: W0319 09:16:34.552631 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:34.552781 master-0 kubenswrapper[4035]: E0319 09:16:34.552724 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:35.190887 master-0 kubenswrapper[4035]: I0319 09:16:35.190818 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:35.218429 master-0 kubenswrapper[4035]: W0319 09:16:35.218300 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:35.218591 master-0 kubenswrapper[4035]: E0319 09:16:35.218429 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:35.357674 master-0 kubenswrapper[4035]: I0319 09:16:35.357635 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 19 09:16:35.364673 master-0 kubenswrapper[4035]: I0319 09:16:35.364576 4035 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="418f6ae2573847d08c66b808d1e222bf4ba0199d6b94aaedcfd0af9254406aac" exitCode=1 Mar 19 09:16:35.364740 master-0 kubenswrapper[4035]: I0319 09:16:35.364674 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"418f6ae2573847d08c66b808d1e222bf4ba0199d6b94aaedcfd0af9254406aac"} Mar 19 09:16:35.364740 master-0 kubenswrapper[4035]: I0319 09:16:35.364693 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:35.365632 
master-0 kubenswrapper[4035]: I0319 09:16:35.365605 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:35.365632 master-0 kubenswrapper[4035]: I0319 09:16:35.365628 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:35.365632 master-0 kubenswrapper[4035]: I0319 09:16:35.365637 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:35.366116 master-0 kubenswrapper[4035]: I0319 09:16:35.365890 4035 scope.go:117] "RemoveContainer" containerID="418f6ae2573847d08c66b808d1e222bf4ba0199d6b94aaedcfd0af9254406aac" Mar 19 09:16:35.367648 master-0 kubenswrapper[4035]: I0319 09:16:35.367622 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc"} Mar 19 09:16:35.481452 master-0 kubenswrapper[4035]: W0319 09:16:35.481348 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:35.481645 master-0 kubenswrapper[4035]: E0319 09:16:35.481461 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:35.796908 master-0 kubenswrapper[4035]: W0319 09:16:35.796760 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:35.796908 master-0 kubenswrapper[4035]: E0319 09:16:35.796838 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:36.191517 master-0 kubenswrapper[4035]: I0319 09:16:36.191467 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:36.372532 master-0 kubenswrapper[4035]: I0319 09:16:36.372463 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1"} Mar 19 09:16:36.373087 master-0 kubenswrapper[4035]: I0319 09:16:36.372575 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:36.373340 master-0 kubenswrapper[4035]: I0319 09:16:36.373302 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:36.373340 master-0 kubenswrapper[4035]: I0319 09:16:36.373332 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:36.373340 master-0 kubenswrapper[4035]: I0319 09:16:36.373340 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 19 09:16:36.373971 master-0 kubenswrapper[4035]: I0319 09:16:36.373941 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 19 09:16:36.374391 master-0 kubenswrapper[4035]: I0319 09:16:36.374364 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 19 09:16:36.375088 master-0 kubenswrapper[4035]: I0319 09:16:36.375052 4035 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="3874bd2dbcba22f32461767224041630002537bbb0bc13b97ad7b4590b8aad83" exitCode=1 Mar 19 09:16:36.375155 master-0 kubenswrapper[4035]: I0319 09:16:36.375102 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"3874bd2dbcba22f32461767224041630002537bbb0bc13b97ad7b4590b8aad83"} Mar 19 09:16:36.375155 master-0 kubenswrapper[4035]: I0319 09:16:36.375119 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:36.375155 master-0 kubenswrapper[4035]: I0319 09:16:36.375150 4035 scope.go:117] "RemoveContainer" containerID="418f6ae2573847d08c66b808d1e222bf4ba0199d6b94aaedcfd0af9254406aac" Mar 19 09:16:36.375822 master-0 kubenswrapper[4035]: I0319 09:16:36.375800 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:36.375822 master-0 kubenswrapper[4035]: I0319 09:16:36.375827 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:36.375920 master-0 kubenswrapper[4035]: I0319 09:16:36.375835 4035 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:36.376056 master-0 kubenswrapper[4035]: I0319 09:16:36.376037 4035 scope.go:117] "RemoveContainer" containerID="3874bd2dbcba22f32461767224041630002537bbb0bc13b97ad7b4590b8aad83" Mar 19 09:16:36.376171 master-0 kubenswrapper[4035]: E0319 09:16:36.376149 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 19 09:16:37.191051 master-0 kubenswrapper[4035]: I0319 09:16:37.191004 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:37.316309 master-0 kubenswrapper[4035]: I0319 09:16:37.316248 4035 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 09:16:37.317576 master-0 kubenswrapper[4035]: E0319 09:16:37.317520 4035 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:37.380203 master-0 kubenswrapper[4035]: I0319 09:16:37.380147 4035 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 19 09:16:37.380781 master-0 kubenswrapper[4035]: I0319 09:16:37.380538 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:37.380781 master-0 kubenswrapper[4035]: I0319 09:16:37.380616 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:37.381334 master-0 kubenswrapper[4035]: I0319 09:16:37.381304 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:37.381334 master-0 kubenswrapper[4035]: I0319 09:16:37.381333 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:37.381416 master-0 kubenswrapper[4035]: I0319 09:16:37.381342 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:37.381660 master-0 kubenswrapper[4035]: I0319 09:16:37.381648 4035 scope.go:117] "RemoveContainer" containerID="3874bd2dbcba22f32461767224041630002537bbb0bc13b97ad7b4590b8aad83" Mar 19 09:16:37.381798 master-0 kubenswrapper[4035]: E0319 09:16:37.381780 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 19 09:16:37.381929 master-0 kubenswrapper[4035]: I0319 09:16:37.381915 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:37.381973 master-0 kubenswrapper[4035]: 
I0319 09:16:37.381938 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:37.381973 master-0 kubenswrapper[4035]: I0319 09:16:37.381946 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:37.406429 master-0 kubenswrapper[4035]: E0319 09:16:37.406394 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 19 09:16:37.613903 master-0 kubenswrapper[4035]: I0319 09:16:37.613859 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:37.614773 master-0 kubenswrapper[4035]: I0319 09:16:37.614735 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:37.614829 master-0 kubenswrapper[4035]: I0319 09:16:37.614783 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:37.614829 master-0 kubenswrapper[4035]: I0319 09:16:37.614797 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:37.614884 master-0 kubenswrapper[4035]: I0319 09:16:37.614845 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:16:37.615707 master-0 kubenswrapper[4035]: E0319 09:16:37.615669 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 19 09:16:37.965374 master-0 kubenswrapper[4035]: E0319 09:16:37.965166 4035 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e33602020e959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.179458905 +0000 UTC m=+0.638073876,LastTimestamp:2026-03-19 09:16:31.179458905 +0000 UTC m=+0.638073876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:38.191374 master-0 kubenswrapper[4035]: I0319 09:16:38.191301 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:38.591342 master-0 kubenswrapper[4035]: W0319 09:16:38.591269 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:38.591342 master-0 kubenswrapper[4035]: E0319 09:16:38.591329 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:38.619202 master-0 kubenswrapper[4035]: W0319 09:16:38.619122 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:38.619202 master-0 kubenswrapper[4035]: E0319 09:16:38.619154 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:38.930896 master-0 kubenswrapper[4035]: W0319 09:16:38.930758 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:38.930896 master-0 kubenswrapper[4035]: E0319 09:16:38.930848 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 19 09:16:39.192086 master-0 kubenswrapper[4035]: I0319 09:16:39.191411 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 19 09:16:39.386508 master-0 kubenswrapper[4035]: I0319 09:16:39.386440 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:39.386508 master-0 kubenswrapper[4035]: I0319 09:16:39.386437 4035 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"0fa64701f5e06185b54d04000e8eff35b5351d75655dd3a6eb6ffaa3f06a93bd"} Mar 19 09:16:39.387242 master-0 kubenswrapper[4035]: I0319 09:16:39.387207 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:39.387302 master-0 kubenswrapper[4035]: I0319 09:16:39.387267 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:39.387302 master-0 kubenswrapper[4035]: I0319 09:16:39.387291 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:39.387858 master-0 kubenswrapper[4035]: I0319 09:16:39.387820 4035 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="00add47a2cdec59c3ac383946429a4dc013519a6933bbb0d7ebdd58eb0eb7186" exitCode=0 Mar 19 09:16:39.387907 master-0 kubenswrapper[4035]: I0319 09:16:39.387883 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"00add47a2cdec59c3ac383946429a4dc013519a6933bbb0d7ebdd58eb0eb7186"} Mar 19 09:16:39.387950 master-0 kubenswrapper[4035]: I0319 09:16:39.387931 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:39.388778 master-0 kubenswrapper[4035]: I0319 09:16:39.388755 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:39.388778 master-0 kubenswrapper[4035]: I0319 09:16:39.388779 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:39.388875 master-0 
kubenswrapper[4035]: I0319 09:16:39.388788 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:39.389637 master-0 kubenswrapper[4035]: I0319 09:16:39.389614 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"e936e2d314dab9154842440cf41e00874f26fcc073cf860d24367374f28b489d"} Mar 19 09:16:39.390869 master-0 kubenswrapper[4035]: I0319 09:16:39.390851 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:39.391459 master-0 kubenswrapper[4035]: I0319 09:16:39.391428 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:39.391510 master-0 kubenswrapper[4035]: I0319 09:16:39.391462 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:39.391510 master-0 kubenswrapper[4035]: I0319 09:16:39.391475 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:40.395219 master-0 kubenswrapper[4035]: I0319 09:16:40.395170 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:40.395668 master-0 kubenswrapper[4035]: I0319 09:16:40.395235 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"0ea74be9ce6a8db82cc76cb8b1abbace62eee2a97494f9a8b0c0af4311285f49"} Mar 19 09:16:40.396312 master-0 kubenswrapper[4035]: I0319 09:16:40.396278 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:40.396366 master-0 
kubenswrapper[4035]: I0319 09:16:40.396328 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:40.396366 master-0 kubenswrapper[4035]: I0319 09:16:40.396341 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:41.074630 master-0 kubenswrapper[4035]: W0319 09:16:41.069770 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 19 09:16:41.074630 master-0 kubenswrapper[4035]: E0319 09:16:41.069852 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 09:16:41.074630 master-0 kubenswrapper[4035]: I0319 09:16:41.069787 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:41.195927 master-0 kubenswrapper[4035]: I0319 09:16:41.195864 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:41.298917 master-0 kubenswrapper[4035]: E0319 09:16:41.298702 4035 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 19 09:16:42.194604 master-0 kubenswrapper[4035]: I0319 09:16:42.194560 4035 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:42.413397 master-0 kubenswrapper[4035]: I0319 09:16:42.413338 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"45039086b1bdf7c8b135828088ebf13ff393c5333b5272f1cf3328f195ddea5b"} Mar 19 09:16:42.413397 master-0 kubenswrapper[4035]: I0319 09:16:42.413399 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:42.414324 master-0 kubenswrapper[4035]: I0319 09:16:42.414232 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:42.414324 master-0 kubenswrapper[4035]: I0319 09:16:42.414269 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:42.414324 master-0 kubenswrapper[4035]: I0319 09:16:42.414282 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:42.731647 master-0 kubenswrapper[4035]: I0319 09:16:42.731594 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:42.859675 master-0 kubenswrapper[4035]: I0319 09:16:42.859627 4035 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:42.863861 master-0 kubenswrapper[4035]: I0319 09:16:42.863824 4035 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:43.059281 master-0 kubenswrapper[4035]: I0319 09:16:43.059213 4035 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:43.062989 master-0 kubenswrapper[4035]: I0319 09:16:43.062953 4035 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:43.197612 master-0 kubenswrapper[4035]: I0319 09:16:43.197460 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:43.419028 master-0 kubenswrapper[4035]: I0319 09:16:43.418926 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"4ff4b935126cc5d750c1d850d7bd8bc2f70fd6fa92c703e7c39a069db8572af3"} Mar 19 09:16:43.419028 master-0 kubenswrapper[4035]: I0319 09:16:43.418962 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:43.419028 master-0 kubenswrapper[4035]: I0319 09:16:43.419023 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:16:43.419028 master-0 kubenswrapper[4035]: I0319 09:16:43.418953 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:43.420233 master-0 kubenswrapper[4035]: I0319 09:16:43.420124 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:43.420233 master-0 kubenswrapper[4035]: I0319 09:16:43.420188 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:43.420233 master-0 kubenswrapper[4035]: I0319 09:16:43.420206 4035 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:43.424458 master-0 kubenswrapper[4035]: I0319 09:16:43.424399 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:43.424458 master-0 kubenswrapper[4035]: I0319 09:16:43.424451 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:43.424458 master-0 kubenswrapper[4035]: I0319 09:16:43.424464 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:43.814062 master-0 kubenswrapper[4035]: E0319 09:16:43.813951 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 19 09:16:44.016177 master-0 kubenswrapper[4035]: I0319 09:16:44.016078 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:44.018035 master-0 kubenswrapper[4035]: I0319 09:16:44.017961 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:44.018168 master-0 kubenswrapper[4035]: I0319 09:16:44.018047 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:44.018168 master-0 kubenswrapper[4035]: I0319 09:16:44.018071 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:44.018168 master-0 kubenswrapper[4035]: I0319 09:16:44.018146 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:16:44.026805 master-0 kubenswrapper[4035]: E0319 09:16:44.026760 
4035 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 19 09:16:44.196672 master-0 kubenswrapper[4035]: I0319 09:16:44.196601 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:44.421710 master-0 kubenswrapper[4035]: I0319 09:16:44.421605 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:44.422701 master-0 kubenswrapper[4035]: I0319 09:16:44.421764 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:44.423161 master-0 kubenswrapper[4035]: I0319 09:16:44.423101 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:44.423245 master-0 kubenswrapper[4035]: I0319 09:16:44.423164 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:44.423245 master-0 kubenswrapper[4035]: I0319 09:16:44.423190 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:44.423370 master-0 kubenswrapper[4035]: I0319 09:16:44.423315 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:44.423370 master-0 kubenswrapper[4035]: I0319 09:16:44.423357 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:44.423523 master-0 kubenswrapper[4035]: I0319 09:16:44.423375 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 19 09:16:45.196235 master-0 kubenswrapper[4035]: I0319 09:16:45.196157 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:45.647965 master-0 kubenswrapper[4035]: I0319 09:16:45.647867 4035 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 19 09:16:45.664502 master-0 kubenswrapper[4035]: I0319 09:16:45.664448 4035 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 19 09:16:46.197986 master-0 kubenswrapper[4035]: I0319 09:16:46.197886 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:47.071910 master-0 kubenswrapper[4035]: W0319 09:16:47.071801 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 19 09:16:47.072757 master-0 kubenswrapper[4035]: E0319 09:16:47.071913 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 19 09:16:47.198119 master-0 kubenswrapper[4035]: I0319 09:16:47.198051 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 19 09:16:47.238032 master-0 kubenswrapper[4035]: I0319 09:16:47.237981 4035 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:47.238290 master-0 kubenswrapper[4035]: I0319 09:16:47.238139 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:47.239476 master-0 kubenswrapper[4035]: I0319 09:16:47.239432 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:47.239476 master-0 kubenswrapper[4035]: I0319 09:16:47.239473 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:47.239476 master-0 kubenswrapper[4035]: I0319 09:16:47.239485 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:47.246066 master-0 kubenswrapper[4035]: I0319 09:16:47.246013 4035 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:47.428059 master-0 kubenswrapper[4035]: I0319 09:16:47.427962 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:47.428387 master-0 kubenswrapper[4035]: I0319 09:16:47.428247 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:47.429026 master-0 kubenswrapper[4035]: I0319 09:16:47.428966 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:47.429026 master-0 kubenswrapper[4035]: I0319 09:16:47.429002 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:47.429026 
master-0 kubenswrapper[4035]: I0319 09:16:47.429011 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:47.434680 master-0 kubenswrapper[4035]: I0319 09:16:47.434605 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:16:47.685838 master-0 kubenswrapper[4035]: W0319 09:16:47.685665 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:47.685838 master-0 kubenswrapper[4035]: E0319 09:16:47.685733 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 19 09:16:47.973090 master-0 kubenswrapper[4035]: E0319 09:16:47.972887 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602020e959 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.179458905 +0000 UTC m=+0.638073876,LastTimestamp:2026-03-19 09:16:31.179458905 +0000 UTC m=+0.638073876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 
09:16:47.979408 master-0 kubenswrapper[4035]: E0319 09:16:47.979284 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:47.986655 master-0 kubenswrapper[4035]: E0319 09:16:47.986569 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:47.993927 master-0 kubenswrapper[4035]: E0319 09:16:47.993723 4035 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e336023441059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,LastTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.001598 master-0 kubenswrapper[4035]: E0319 09:16:48.001377 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602740d23d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.298990653 +0000 UTC m=+0.757605594,LastTimestamp:2026-03-19 09:16:31.298990653 +0000 UTC m=+0.757605594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.008927 master-0 kubenswrapper[4035]: E0319 09:16:48.008720 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602342e296\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.397949476 +0000 UTC m=+0.856564427,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.014576 master-0 kubenswrapper[4035]: E0319 09:16:48.014418 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602343bcd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.397977324 +0000 UTC m=+0.856592285,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.019381 master-0 kubenswrapper[4035]: E0319 09:16:48.019177 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e336023441059\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189e336023441059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,LastTimestamp:2026-03-19 09:16:31.397991864 +0000 UTC m=+0.856606825,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.024048 master-0 kubenswrapper[4035]: E0319 09:16:48.023874 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602342e296\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.435112576 +0000 UTC m=+0.893727527,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.029368 master-0 kubenswrapper[4035]: E0319 09:16:48.029191 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602343bcd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.435135351 +0000 UTC m=+0.893750302,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.040586 master-0 kubenswrapper[4035]: E0319 09:16:48.039717 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e336023441059\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e336023441059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,LastTimestamp:2026-03-19 09:16:31.435147909 +0000 UTC m=+0.893762860,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.047913 master-0 kubenswrapper[4035]: E0319 09:16:48.047776 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602342e296\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.436096333 +0000 UTC m=+0.894711274,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.057303 master-0 kubenswrapper[4035]: E0319 09:16:48.057156 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602343bcd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.436114385 +0000 UTC m=+0.894729326,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.065488 master-0 kubenswrapper[4035]: E0319 09:16:48.065279 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e336023441059\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e336023441059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,LastTimestamp:2026-03-19 09:16:31.436123411 +0000 UTC m=+0.894738352,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.071020 master-0 kubenswrapper[4035]: E0319 09:16:48.070910 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602342e296\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.436341214 +0000 UTC m=+0.894956195,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.077607 master-0 kubenswrapper[4035]: E0319 09:16:48.077505 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602343bcd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.436379899 +0000 UTC m=+0.894994890,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.082874 master-0 kubenswrapper[4035]: E0319 09:16:48.082717 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e336023441059\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e336023441059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,LastTimestamp:2026-03-19 09:16:31.436401394 +0000 UTC m=+0.895016375,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.088794 master-0 kubenswrapper[4035]: E0319 09:16:48.088682 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602342e296\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.437258166 +0000 UTC m=+0.895873127,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.094045 master-0 kubenswrapper[4035]: E0319 09:16:48.093877 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602343bcd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.437285644 +0000 UTC m=+0.895900606,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.099712 master-0 kubenswrapper[4035]: E0319 09:16:48.099473 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e336023441059\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e336023441059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,LastTimestamp:2026-03-19 09:16:31.437298873 +0000 UTC m=+0.895913834,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.105642 master-0 kubenswrapper[4035]: E0319 09:16:48.105403 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602342e296\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.438065397 +0000 UTC m=+0.896680338,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.111057 master-0 kubenswrapper[4035]: E0319 09:16:48.110937 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602343bcd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.438079426 +0000 UTC m=+0.896694367,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.116311 master-0 kubenswrapper[4035]: E0319 09:16:48.116181 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e336023441059\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e336023441059 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232094297 +0000 UTC m=+0.690709268,LastTimestamp:2026-03-19 09:16:31.438089092 +0000 UTC m=+0.896704033,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.121315 master-0 kubenswrapper[4035]: E0319 09:16:48.121178 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602342e296\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602342e296 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232017046 +0000 UTC m=+0.690632067,LastTimestamp:2026-03-19 09:16:31.438528732 +0000 UTC m=+0.897143673,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.126356 master-0 kubenswrapper[4035]: E0319 09:16:48.126219 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e33602343bcd0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e33602343bcd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:31.232072912 +0000 UTC m=+0.690687894,LastTimestamp:2026-03-19 09:16:31.438559382 +0000 UTC m=+0.897174323,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.133603 master-0 kubenswrapper[4035]: E0319 09:16:48.133412 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3360676ed2c4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:32.375747268 +0000 UTC m=+1.834362209,LastTimestamp:2026-03-19 09:16:32.375747268 +0000 UTC m=+1.834362209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.139946 master-0 kubenswrapper[4035]: E0319 09:16:48.139818 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e336067c8352c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:32.381605164 +0000 UTC m=+1.840220105,LastTimestamp:2026-03-19 09:16:32.381605164 +0000 UTC m=+1.840220105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.144629 master-0 kubenswrapper[4035]: E0319 09:16:48.144519 4035 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e33606b19d6d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:32.437286609 +0000 UTC m=+1.895901560,LastTimestamp:2026-03-19 09:16:32.437286609 +0000 UTC m=+1.895901560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.149070 master-0 kubenswrapper[4035]: E0319 09:16:48.148940 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e33606be0d44b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:32.450327627 +0000 UTC m=+1.908942568,LastTimestamp:2026-03-19 09:16:32.450327627 +0000 
UTC m=+1.908942568,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.153464 master-0 kubenswrapper[4035]: E0319 09:16:48.153373 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e33606d1a9776 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:32.470890358 +0000 UTC m=+1.929505309,LastTimestamp:2026-03-19 09:16:32.470890358 +0000 UTC m=+1.929505309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.158817 master-0 kubenswrapper[4035]: E0319 09:16:48.158723 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3360c443978c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 1.557s (1.557s including waiting). Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:33.933195148 +0000 UTC m=+3.391810089,LastTimestamp:2026-03-19 09:16:33.933195148 +0000 UTC m=+3.391810089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.163313 master-0 kubenswrapper[4035]: E0319 09:16:48.163169 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3360cf90e5b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:34.122810805 +0000 UTC m=+3.581425746,LastTimestamp:2026-03-19 09:16:34.122810805 +0000 UTC m=+3.581425746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.168165 master-0 kubenswrapper[4035]: E0319 09:16:48.168016 4035 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3360d0ae340b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:34.141508619 +0000 UTC m=+3.600123560,LastTimestamp:2026-03-19 09:16:34.141508619 +0000 UTC m=+3.600123560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.173400 master-0 kubenswrapper[4035]: E0319 09:16:48.173247 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3360fc7fa810 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:34.876655632 +0000 UTC m=+4.335270573,LastTimestamp:2026-03-19 
09:16:34.876655632 +0000 UTC m=+4.335270573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.178198 master-0 kubenswrapper[4035]: E0319 09:16:48.178069 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3360ff80eef8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 2.476s (2.476s including waiting). 
Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:34.927070968 +0000 UTC m=+4.385685909,LastTimestamp:2026-03-19 09:16:34.927070968 +0000 UTC m=+4.385685909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.182124 master-0 kubenswrapper[4035]: E0319 09:16:48.182010 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336107bf4c9e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.065375902 +0000 UTC m=+4.523990843,LastTimestamp:2026-03-19 09:16:35.065375902 +0000 UTC m=+4.523990843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.186795 master-0 kubenswrapper[4035]: E0319 09:16:48.186683 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336108b0d620 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.08120528 +0000 UTC m=+4.539820221,LastTimestamp:2026-03-19 09:16:35.08120528 +0000 UTC m=+4.539820221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.191689 master-0 kubenswrapper[4035]: I0319 09:16:48.191638 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:48.191866 master-0 kubenswrapper[4035]: E0319 09:16:48.191696 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e33610b390969 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.123685737 +0000 UTC m=+4.582300678,LastTimestamp:2026-03-19 09:16:35.123685737 +0000 UTC m=+4.582300678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.196265 master-0 
kubenswrapper[4035]: E0319 09:16:48.196131 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e33610c2a47af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.139495855 +0000 UTC m=+4.598110796,LastTimestamp:2026-03-19 09:16:35.139495855 +0000 UTC m=+4.598110796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.201163 master-0 kubenswrapper[4035]: E0319 09:16:48.201052 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e33610c49646b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.141534827 +0000 UTC m=+4.600149768,LastTimestamp:2026-03-19 09:16:35.141534827 +0000 UTC m=+4.600149768,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.205705 master-0 kubenswrapper[4035]: E0319 09:16:48.205566 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e336118aac669 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.349243497 +0000 UTC m=+4.807858438,LastTimestamp:2026-03-19 09:16:35.349243497 +0000 UTC m=+4.807858438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.209755 master-0 kubenswrapper[4035]: E0319 09:16:48.209686 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e3361199e3de1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.365199329 +0000 UTC m=+4.823814270,LastTimestamp:2026-03-19 09:16:35.365199329 +0000 UTC m=+4.823814270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.213504 master-0 kubenswrapper[4035]: E0319 09:16:48.213357 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3360fc7fa810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3360fc7fa810 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:34.876655632 +0000 UTC m=+4.335270573,LastTimestamp:2026-03-19 09:16:35.368995723 +0000 UTC m=+4.827610664,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.218027 master-0 kubenswrapper[4035]: E0319 09:16:48.217965 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e336107bf4c9e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336107bf4c9e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.065375902 +0000 UTC m=+4.523990843,LastTimestamp:2026-03-19 09:16:35.549945229 +0000 UTC m=+5.008560170,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.226040 master-0 kubenswrapper[4035]: E0319 09:16:48.225796 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e336108b0d620\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336108b0d620 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.08120528 +0000 UTC m=+4.539820221,LastTimestamp:2026-03-19 09:16:35.559520507 +0000 UTC m=+5.018135448,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.234371 master-0 kubenswrapper[4035]: E0319 09:16:48.234259 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336155dfa860 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:36.376119392 +0000 UTC m=+5.834734333,LastTimestamp:2026-03-19 09:16:36.376119392 +0000 UTC m=+5.834734333,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.238415 master-0 kubenswrapper[4035]: E0319 09:16:48.238281 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e336155dfa860\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336155dfa860 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:36.376119392 +0000 UTC m=+5.834734333,LastTimestamp:2026-03-19 09:16:37.381752404 +0000 UTC m=+6.840367345,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.247200 master-0 kubenswrapper[4035]: E0319 09:16:48.246908 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e3361f12670b9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 6.51s (6.51s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:38.981226681 +0000 UTC m=+8.439841632,LastTimestamp:2026-03-19 09:16:38.981226681 +0000 UTC m=+8.439841632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.254299 master-0 kubenswrapper[4035]: E0319 09:16:48.254168 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3361f2d02dad kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 6.627s (6.627s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.009127853 +0000 UTC m=+8.467742804,LastTimestamp:2026-03-19 09:16:39.009127853 +0000 UTC m=+8.467742804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.259769 master-0 kubenswrapper[4035]: E0319 09:16:48.259647 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3361f4761d3a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 6.599s (6.599s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.036779834 +0000 UTC m=+8.495394815,LastTimestamp:2026-03-19 09:16:39.036779834 +0000 UTC m=+8.495394815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.263942 master-0 kubenswrapper[4035]: E0319 09:16:48.263870 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e3361fcf5db02 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.179369218 +0000 UTC m=+8.637984169,LastTimestamp:2026-03-19 09:16:39.179369218 +0000 UTC m=+8.637984169,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.268795 master-0 kubenswrapper[4035]: E0319 09:16:48.268718 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e3361fd9fc225 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.190503973 +0000 UTC m=+8.649118924,LastTimestamp:2026-03-19 09:16:39.190503973 +0000 UTC m=+8.649118924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.272779 master-0 kubenswrapper[4035]: E0319 09:16:48.272716 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3361fdf50798 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.196092312 +0000 UTC m=+8.654707253,LastTimestamp:2026-03-19 09:16:39.196092312 +0000 UTC m=+8.654707253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.277279 master-0 kubenswrapper[4035]: E0319 09:16:48.277197 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3361fe23f625 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.199168037 +0000 UTC m=+8.657782978,LastTimestamp:2026-03-19 09:16:39.199168037 +0000 UTC m=+8.657782978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.282243 master-0 kubenswrapper[4035]: E0319 09:16:48.282132 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3361fead3b27 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.208164135 +0000 UTC m=+8.666779076,LastTimestamp:2026-03-19 09:16:39.208164135 +0000 UTC m=+8.666779076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.287337 master-0 kubenswrapper[4035]: E0319 09:16:48.287178 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3361feb63a92 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.20875381 +0000 UTC m=+8.667368741,LastTimestamp:2026-03-19 09:16:39.20875381 +0000 UTC m=+8.667368741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.292335 master-0 kubenswrapper[4035]: E0319 09:16:48.292249 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3361fec2ff74 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.209590644 +0000 UTC m=+8.668205585,LastTimestamp:2026-03-19 09:16:39.209590644 +0000 UTC m=+8.668205585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.297145 master-0 kubenswrapper[4035]: E0319 09:16:48.296659 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3362099018ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.390804207 +0000 UTC m=+8.849419148,LastTimestamp:2026-03-19 09:16:39.390804207 +0000 UTC m=+8.849419148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.300148 master-0 kubenswrapper[4035]: E0319 09:16:48.300073 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e33621496f3e9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.575802857 +0000 UTC m=+9.034417798,LastTimestamp:2026-03-19 09:16:39.575802857 +0000 UTC m=+9.034417798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.303759 master-0 kubenswrapper[4035]: E0319 09:16:48.303676 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3362156f4be5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.589981157 +0000 UTC m=+9.048596118,LastTimestamp:2026-03-19 09:16:39.589981157 +0000 UTC m=+9.048596118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.308257 master-0 kubenswrapper[4035]: E0319 09:16:48.308133 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e336215818cc7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:39.591177415 +0000 UTC m=+9.049792366,LastTimestamp:2026-03-19 09:16:39.591177415 +0000 UTC m=+9.049792366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.311900 master-0 kubenswrapper[4035]: E0319 09:16:48.311823 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e336298d4db14 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 2.584s (2.584s including waiting). 
Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:41.794452244 +0000 UTC m=+11.253067235,LastTimestamp:2026-03-19 09:16:41.794452244 +0000 UTC m=+11.253067235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.315423 master-0 kubenswrapper[4035]: E0319 09:16:48.315348 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3362a6d9ab7e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:42.029648766 +0000 UTC m=+11.488263707,LastTimestamp:2026-03-19 09:16:42.029648766 +0000 UTC m=+11.488263707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.319047 master-0 kubenswrapper[4035]: E0319 09:16:48.318947 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e3362a78b7f14 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:42.041302804 +0000 UTC m=+11.499917745,LastTimestamp:2026-03-19 09:16:42.041302804 +0000 UTC m=+11.499917745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.324938 master-0 kubenswrapper[4035]: E0319 09:16:48.324791 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3362dda06a92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 3.357s (3.357s including waiting). 
Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:42.948643474 +0000 UTC m=+12.407258415,LastTimestamp:2026-03-19 09:16:42.948643474 +0000 UTC m=+12.407258415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.329471 master-0 kubenswrapper[4035]: E0319 09:16:48.329323 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3362edb4e0c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:43.218419911 +0000 UTC m=+12.677034862,LastTimestamp:2026-03-19 09:16:43.218419911 +0000 UTC m=+12.677034862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.333979 master-0 kubenswrapper[4035]: E0319 09:16:48.333813 4035 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e3362f53d6c66 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:43.344809062 +0000 UTC m=+12.803424013,LastTimestamp:2026-03-19 09:16:43.344809062 +0000 UTC m=+12.803424013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:48.430911 master-0 kubenswrapper[4035]: I0319 09:16:48.430860 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:48.431801 master-0 kubenswrapper[4035]: I0319 09:16:48.431760 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:48.431875 master-0 kubenswrapper[4035]: I0319 09:16:48.431819 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:48.431875 master-0 kubenswrapper[4035]: I0319 09:16:48.431835 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:49.195410 master-0 kubenswrapper[4035]: I0319 09:16:49.195340 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:49.334214 master-0 kubenswrapper[4035]: I0319 09:16:49.334161 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:49.335263 master-0 kubenswrapper[4035]: I0319 09:16:49.335227 4035 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:49.335263 master-0 kubenswrapper[4035]: I0319 09:16:49.335254 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:49.335263 master-0 kubenswrapper[4035]: I0319 09:16:49.335262 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:49.335844 master-0 kubenswrapper[4035]: I0319 09:16:49.335812 4035 scope.go:117] "RemoveContainer" containerID="3874bd2dbcba22f32461767224041630002537bbb0bc13b97ad7b4590b8aad83" Mar 19 09:16:49.343088 master-0 kubenswrapper[4035]: E0319 09:16:49.342947 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e3360fc7fa810\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e3360fc7fa810 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:34.876655632 +0000 UTC m=+4.335270573,LastTimestamp:2026-03-19 09:16:49.338441185 +0000 UTC m=+18.797056146,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:49.432873 master-0 
kubenswrapper[4035]: I0319 09:16:49.432756 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:49.433874 master-0 kubenswrapper[4035]: I0319 09:16:49.433839 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:49.433939 master-0 kubenswrapper[4035]: I0319 09:16:49.433881 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:16:49.433939 master-0 kubenswrapper[4035]: I0319 09:16:49.433894 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:16:49.509256 master-0 kubenswrapper[4035]: E0319 09:16:49.509150 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e336107bf4c9e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336107bf4c9e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.065375902 +0000 UTC m=+4.523990843,LastTimestamp:2026-03-19 09:16:49.50262946 +0000 UTC m=+18.961244401,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:49.519167 master-0 kubenswrapper[4035]: E0319 09:16:49.519051 4035 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-rbac-proxy-crio-master-0.189e336108b0d620\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336108b0d620 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:35.08120528 +0000 UTC m=+4.539820221,LastTimestamp:2026-03-19 09:16:49.514776291 +0000 UTC m=+18.973391232,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:16:49.726661 master-0 kubenswrapper[4035]: W0319 09:16:49.726414 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 19 09:16:49.726661 master-0 kubenswrapper[4035]: E0319 09:16:49.726490 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 19 09:16:49.894126 master-0 kubenswrapper[4035]: I0319 09:16:49.894080 4035 csr.go:261] certificate signing request csr-lmdx5 is approved, waiting to be issued Mar 19 09:16:50.197158 master-0 kubenswrapper[4035]: I0319 09:16:50.197083 4035 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 19 09:16:50.437320 master-0 kubenswrapper[4035]: I0319 09:16:50.437250 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 19 09:16:50.438201 master-0 kubenswrapper[4035]: I0319 09:16:50.438142 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 19 09:16:50.439127 master-0 kubenswrapper[4035]: I0319 09:16:50.439075 4035 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644" exitCode=1 Mar 19 09:16:50.439282 master-0 kubenswrapper[4035]: I0319 09:16:50.439142 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644"} Mar 19 09:16:50.439282 master-0 kubenswrapper[4035]: I0319 09:16:50.439205 4035 scope.go:117] "RemoveContainer" containerID="3874bd2dbcba22f32461767224041630002537bbb0bc13b97ad7b4590b8aad83" Mar 19 09:16:50.439971 master-0 kubenswrapper[4035]: I0319 09:16:50.439479 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:16:50.440973 master-0 kubenswrapper[4035]: I0319 09:16:50.440907 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:16:50.441139 master-0 kubenswrapper[4035]: I0319 
09:16:50.440978 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:50.441139 master-0 kubenswrapper[4035]: I0319 09:16:50.441003 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:50.442300 master-0 kubenswrapper[4035]: I0319 09:16:50.441816 4035 scope.go:117] "RemoveContainer" containerID="c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644"
Mar 19 09:16:50.442300 master-0 kubenswrapper[4035]: E0319 09:16:50.442100 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 19 09:16:50.448310 master-0 kubenswrapper[4035]: E0319 09:16:50.448047 4035 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e336155dfa860\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e336155dfa860 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:16:36.376119392 +0000 UTC m=+5.834734333,LastTimestamp:2026-03-19 09:16:50.442034525 +0000 UTC m=+19.900649496,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 09:16:50.819288 master-0 kubenswrapper[4035]: E0319 09:16:50.819142 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 19 09:16:51.027461 master-0 kubenswrapper[4035]: I0319 09:16:51.027331 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:51.028942 master-0 kubenswrapper[4035]: I0319 09:16:51.028901 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:51.029025 master-0 kubenswrapper[4035]: I0319 09:16:51.028959 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:51.029025 master-0 kubenswrapper[4035]: I0319 09:16:51.028976 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:51.029096 master-0 kubenswrapper[4035]: I0319 09:16:51.029034 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 19 09:16:51.036799 master-0 kubenswrapper[4035]: E0319 09:16:51.036416 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 19 09:16:51.198148 master-0 kubenswrapper[4035]: I0319 09:16:51.198057 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:51.299743 master-0 kubenswrapper[4035]: E0319 09:16:51.299648 4035 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 19 09:16:51.444007 master-0 kubenswrapper[4035]: I0319 09:16:51.443942 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 19 09:16:52.198586 master-0 kubenswrapper[4035]: I0319 09:16:52.198468 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:52.739031 master-0 kubenswrapper[4035]: I0319 09:16:52.738956 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:16:52.739251 master-0 kubenswrapper[4035]: I0319 09:16:52.739196 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:52.740503 master-0 kubenswrapper[4035]: I0319 09:16:52.740453 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:52.740503 master-0 kubenswrapper[4035]: I0319 09:16:52.740486 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:52.740503 master-0 kubenswrapper[4035]: I0319 09:16:52.740498 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:52.745514 master-0 kubenswrapper[4035]: I0319 09:16:52.745457 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:16:53.197815 master-0 kubenswrapper[4035]: I0319 09:16:53.197702 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:53.451055 master-0 kubenswrapper[4035]: I0319 09:16:53.450857 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:53.452088 master-0 kubenswrapper[4035]: I0319 09:16:53.452034 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:53.452088 master-0 kubenswrapper[4035]: I0319 09:16:53.452066 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:53.452088 master-0 kubenswrapper[4035]: I0319 09:16:53.452077 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:53.858820 master-0 kubenswrapper[4035]: W0319 09:16:53.858746 4035 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 19 09:16:53.859036 master-0 kubenswrapper[4035]: E0319 09:16:53.858825 4035 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 19 09:16:54.197495 master-0 kubenswrapper[4035]: I0319 09:16:54.197378 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:55.197926 master-0 kubenswrapper[4035]: I0319 09:16:55.197868 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:56.195455 master-0 kubenswrapper[4035]: I0319 09:16:56.195351 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:57.195918 master-0 kubenswrapper[4035]: I0319 09:16:57.195873 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:57.825402 master-0 kubenswrapper[4035]: E0319 09:16:57.825353 4035 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 19 09:16:58.037177 master-0 kubenswrapper[4035]: I0319 09:16:58.037110 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:16:58.038079 master-0 kubenswrapper[4035]: I0319 09:16:58.038048 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:16:58.038079 master-0 kubenswrapper[4035]: I0319 09:16:58.038075 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:16:58.038079 master-0 kubenswrapper[4035]: I0319 09:16:58.038083 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:16:58.038356 master-0 kubenswrapper[4035]: I0319 09:16:58.038122 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 19 09:16:58.043353 master-0 kubenswrapper[4035]: E0319 09:16:58.043305 4035 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 19 09:16:58.199460 master-0 kubenswrapper[4035]: I0319 09:16:58.199348 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:59.196749 master-0 kubenswrapper[4035]: I0319 09:16:59.196666 4035 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 19 09:16:59.520897 master-0 kubenswrapper[4035]: I0319 09:16:59.520840 4035 csr.go:257] certificate signing request csr-lmdx5 is issued
Mar 19 09:17:00.057836 master-0 kubenswrapper[4035]: I0319 09:17:00.057789 4035 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 19 09:17:00.202435 master-0 kubenswrapper[4035]: I0319 09:17:00.201556 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.221255 master-0 kubenswrapper[4035]: I0319 09:17:00.221207 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.275893 master-0 kubenswrapper[4035]: I0319 09:17:00.275848 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.522241 master-0 kubenswrapper[4035]: I0319 09:17:00.522097 4035 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-20 09:09:07 +0000 UTC, rotation deadline is 2026-03-20 06:13:34.044877274 +0000 UTC
Mar 19 09:17:00.522241 master-0 kubenswrapper[4035]: I0319 09:17:00.522160 4035 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h56m33.522722413s for next certificate rotation
Mar 19 09:17:00.539763 master-0 kubenswrapper[4035]: I0319 09:17:00.539717 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.539763 master-0 kubenswrapper[4035]: E0319 09:17:00.539753 4035 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 19 09:17:00.559906 master-0 kubenswrapper[4035]: I0319 09:17:00.559853 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.574477 master-0 kubenswrapper[4035]: I0319 09:17:00.574432 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.631368 master-0 kubenswrapper[4035]: I0319 09:17:00.631301 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.910432 master-0 kubenswrapper[4035]: I0319 09:17:00.910352 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:00.910432 master-0 kubenswrapper[4035]: E0319 09:17:00.910393 4035 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 19 09:17:01.006108 master-0 kubenswrapper[4035]: I0319 09:17:01.006053 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:01.021006 master-0 kubenswrapper[4035]: I0319 09:17:01.020963 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:01.077910 master-0 kubenswrapper[4035]: I0319 09:17:01.077854 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:01.299913 master-0 kubenswrapper[4035]: E0319 09:17:01.299818 4035 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 19 09:17:01.346106 master-0 kubenswrapper[4035]: I0319 09:17:01.346027 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:01.346106 master-0 kubenswrapper[4035]: E0319 09:17:01.346089 4035 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 19 09:17:01.940841 master-0 kubenswrapper[4035]: I0319 09:17:01.940782 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:01.954941 master-0 kubenswrapper[4035]: I0319 09:17:01.954860 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:02.009372 master-0 kubenswrapper[4035]: I0319 09:17:02.009317 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:02.284280 master-0 kubenswrapper[4035]: I0319 09:17:02.284121 4035 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 19 09:17:02.284280 master-0 kubenswrapper[4035]: E0319 09:17:02.284176 4035 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 19 09:17:03.333670 master-0 kubenswrapper[4035]: I0319 09:17:03.333586 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:17:03.335352 master-0 kubenswrapper[4035]: I0319 09:17:03.335298 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:17:03.335410 master-0 kubenswrapper[4035]: I0319 09:17:03.335372 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:17:03.335453 master-0 kubenswrapper[4035]: I0319 09:17:03.335418 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:17:03.335968 master-0 kubenswrapper[4035]: I0319 09:17:03.335921 4035 scope.go:117] "RemoveContainer" containerID="c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644"
Mar 19 09:17:03.336229 master-0 kubenswrapper[4035]: E0319 09:17:03.336177 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 19 09:17:04.376465 master-0 kubenswrapper[4035]: I0319 09:17:04.376393 4035 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 19 09:17:04.830366 master-0 kubenswrapper[4035]: E0319 09:17:04.830156 4035 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 19 09:17:05.044158 master-0 kubenswrapper[4035]: I0319 09:17:05.044051 4035 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 19 09:17:05.045713 master-0 kubenswrapper[4035]: I0319 09:17:05.045655 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 19 09:17:05.045913 master-0 kubenswrapper[4035]: I0319 09:17:05.045751 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 19 09:17:05.045913 master-0 kubenswrapper[4035]: I0319 09:17:05.045775 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 19 09:17:05.045913 master-0 kubenswrapper[4035]: I0319 09:17:05.045830 4035 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 19 09:17:05.061771 master-0 kubenswrapper[4035]: I0319 09:17:05.061703 4035 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 19 09:17:05.061771 master-0 kubenswrapper[4035]: E0319 09:17:05.061760 4035 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 19 09:17:05.073410 master-0 kubenswrapper[4035]: E0319 09:17:05.073344 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.174453 master-0 kubenswrapper[4035]: E0319 09:17:05.174346 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.210087 master-0 kubenswrapper[4035]: I0319 09:17:05.209966 4035 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 19 09:17:05.224534 master-0 kubenswrapper[4035]: I0319 09:17:05.224442 4035 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 19 09:17:05.274845 master-0 kubenswrapper[4035]: E0319 09:17:05.274768 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.375967 master-0 kubenswrapper[4035]: E0319 09:17:05.375877 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.476227 master-0 kubenswrapper[4035]: E0319 09:17:05.476013 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.576859 master-0 kubenswrapper[4035]: E0319 09:17:05.576764 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.584133 master-0 kubenswrapper[4035]: I0319 09:17:05.584064 4035 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 19 09:17:05.677465 master-0 kubenswrapper[4035]: E0319 09:17:05.677143 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.778447 master-0 kubenswrapper[4035]: E0319 09:17:05.778200 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.878840 master-0 kubenswrapper[4035]: E0319 09:17:05.878732 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:05.979747 master-0 kubenswrapper[4035]: E0319 09:17:05.979638 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:06.080425 master-0 kubenswrapper[4035]: E0319 09:17:06.080220 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:06.181502 master-0 kubenswrapper[4035]: E0319 09:17:06.181373 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:06.282469 master-0 kubenswrapper[4035]: E0319 09:17:06.282389 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:06.383665 master-0 kubenswrapper[4035]: E0319 09:17:06.383536 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:06.484406 master-0 kubenswrapper[4035]: E0319 09:17:06.484347 4035 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:17:06.504094 master-0 kubenswrapper[4035]: I0319 09:17:06.503908 4035 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 19 09:17:07.188025 master-0 kubenswrapper[4035]: I0319 09:17:07.187963 4035 apiserver.go:52] "Watching apiserver"
Mar 19 09:17:07.194254 master-0 kubenswrapper[4035]: I0319 09:17:07.194015 4035 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 19 09:17:07.194254 master-0 kubenswrapper[4035]: I0319 09:17:07.194166 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-kw4xv","openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"]
Mar 19 09:17:07.194855 master-0 kubenswrapper[4035]: I0319 09:17:07.194417 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.194855 master-0 kubenswrapper[4035]: I0319 09:17:07.194486 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.195970 master-0 kubenswrapper[4035]: I0319 09:17:07.195917 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 19 09:17:07.197048 master-0 kubenswrapper[4035]: I0319 09:17:07.196953 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 19 09:17:07.197048 master-0 kubenswrapper[4035]: I0319 09:17:07.197005 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 19 09:17:07.197507 master-0 kubenswrapper[4035]: I0319 09:17:07.197378 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 19 09:17:07.197507 master-0 kubenswrapper[4035]: I0319 09:17:07.197469 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 19 09:17:07.197763 master-0 kubenswrapper[4035]: I0319 09:17:07.197739 4035 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 19 09:17:07.198018 master-0 kubenswrapper[4035]: I0319 09:17:07.197968 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 19 09:17:07.293185 master-0 kubenswrapper[4035]: I0319 09:17:07.293076 4035 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 19 09:17:07.358856 master-0 kubenswrapper[4035]: I0319 09:17:07.358748 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5sbl\" (UniqueName: \"kubernetes.io/projected/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-kube-api-access-k5sbl\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.358856 master-0 kubenswrapper[4035]: I0319 09:17:07.358818 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.358856 master-0 kubenswrapper[4035]: I0319 09:17:07.358855 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.359207 master-0 kubenswrapper[4035]: I0319 09:17:07.358950 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-resolv-conf\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.359207 master-0 kubenswrapper[4035]: I0319 09:17:07.359066 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.359207 master-0 kubenswrapper[4035]: I0319 09:17:07.359135 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-ca-bundle\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.359207 master-0 kubenswrapper[4035]: I0319 09:17:07.359176 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-var-run-resolv-conf\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.359514 master-0 kubenswrapper[4035]: I0319 09:17:07.359252 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.359514 master-0 kubenswrapper[4035]: I0319 09:17:07.359294 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.359514 master-0 kubenswrapper[4035]: I0319 09:17:07.359325 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-sno-bootstrap-files\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.459945 master-0 kubenswrapper[4035]: I0319 09:17:07.459789 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-ca-bundle\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.459945 master-0 kubenswrapper[4035]: I0319 09:17:07.459855 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-var-run-resolv-conf\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.460199 master-0 kubenswrapper[4035]: I0319 09:17:07.459996 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-ca-bundle\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.460199 master-0 kubenswrapper[4035]: I0319 09:17:07.460088 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.460199 master-0 kubenswrapper[4035]: I0319 09:17:07.460142 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-var-run-resolv-conf\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.460199 master-0 kubenswrapper[4035]: I0319 09:17:07.460161 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.460199 master-0 kubenswrapper[4035]: I0319 09:17:07.460178 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.460199 master-0 kubenswrapper[4035]: I0319 09:17:07.460200 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.460759 master-0 kubenswrapper[4035]: I0319 09:17:07.460233 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-sno-bootstrap-files\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.460759 master-0 kubenswrapper[4035]: I0319 09:17:07.460283 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-resolv-conf\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.460759 master-0 kubenswrapper[4035]: I0319 09:17:07.460318 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5sbl\" (UniqueName: \"kubernetes.io/projected/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-kube-api-access-k5sbl\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.460759 master-0 kubenswrapper[4035]: I0319 09:17:07.460327 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.460759 master-0 kubenswrapper[4035]: I0319 09:17:07.460657 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-resolv-conf\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.461044 master-0 kubenswrapper[4035]: I0319 09:17:07.460868 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-sno-bootstrap-files\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.461044 master-0 kubenswrapper[4035]: I0319 09:17:07.460938 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.461044 master-0 kubenswrapper[4035]: I0319 09:17:07.460986 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.461222 master-0 kubenswrapper[4035]: E0319 09:17:07.461152 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 09:17:07.461284 master-0 kubenswrapper[4035]: E0319 09:17:07.461251 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:07.961222205 +0000 UTC m=+37.419837176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found
Mar 19 09:17:07.462969 master-0 kubenswrapper[4035]: I0319 09:17:07.462931 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.494599 master-0 kubenswrapper[4035]: I0319 09:17:07.494527 4035 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 19 09:17:07.502008 master-0 kubenswrapper[4035]: I0319 09:17:07.501971 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5sbl\" (UniqueName: \"kubernetes.io/projected/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-kube-api-access-k5sbl\") pod \"assisted-installer-controller-kw4xv\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.505388 master-0 kubenswrapper[4035]: I0319 09:17:07.505325 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:17:07.532963 master-0 kubenswrapper[4035]: I0319 09:17:07.532899 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:17:07.546866 master-0 kubenswrapper[4035]: W0319 09:17:07.546812 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47ea8c5d_8db7_44a5_bdfd_e12d3ac1d26c.slice/crio-bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650 WatchSource:0}: Error finding container bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650: Status 404 returned error can't find the container with id bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650
Mar 19 09:17:07.714573 master-0 kubenswrapper[4035]: I0319 09:17:07.714377 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7bd846bfc4-gkvf5"]
Mar 19 09:17:07.714822 master-0 kubenswrapper[4035]: I0319 09:17:07.714755 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5"
Mar 19 09:17:07.718148 master-0 kubenswrapper[4035]: I0319 09:17:07.718037 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 19 09:17:07.718418 master-0 kubenswrapper[4035]: I0319 09:17:07.718261 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 19 09:17:07.718418 master-0 kubenswrapper[4035]: I0319 09:17:07.718039 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 19 09:17:07.863338 master-0 kubenswrapper[4035]: I0319 09:17:07.863277 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvtc\" (UniqueName: \"kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID:
\"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.863338 master-0 kubenswrapper[4035]: I0319 09:17:07.863324 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.863338 master-0 kubenswrapper[4035]: I0319 09:17:07.863347 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.964320 master-0 kubenswrapper[4035]: I0319 09:17:07.964227 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:17:07.964320 master-0 kubenswrapper[4035]: I0319 09:17:07.964303 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvtc\" (UniqueName: \"kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.964691 master-0 kubenswrapper[4035]: I0319 09:17:07.964345 4035 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.964691 master-0 kubenswrapper[4035]: I0319 09:17:07.964377 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.964691 master-0 kubenswrapper[4035]: I0319 09:17:07.964476 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.964691 master-0 kubenswrapper[4035]: E0319 09:17:07.964662 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:07.964978 master-0 kubenswrapper[4035]: E0319 09:17:07.964754 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:08.964727217 +0000 UTC m=+38.423342198 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:07.970749 master-0 kubenswrapper[4035]: I0319 09:17:07.970710 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:07.989350 master-0 kubenswrapper[4035]: I0319 09:17:07.989293 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvtc\" (UniqueName: \"kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:08.032591 master-0 kubenswrapper[4035]: I0319 09:17:08.032494 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:17:08.047034 master-0 kubenswrapper[4035]: W0319 09:17:08.046952 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc4a026a_8f3d_4dbb_a6bc_793d5fdca46c.slice/crio-cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467 WatchSource:0}: Error finding container cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467: Status 404 returned error can't find the container with id cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467 Mar 19 09:17:08.487467 master-0 kubenswrapper[4035]: I0319 09:17:08.487302 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" event={"ID":"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c","Type":"ContainerStarted","Data":"cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467"} Mar 19 09:17:08.492210 master-0 kubenswrapper[4035]: I0319 09:17:08.492151 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kw4xv" event={"ID":"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c","Type":"ContainerStarted","Data":"bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650"} Mar 19 09:17:08.972810 master-0 kubenswrapper[4035]: I0319 09:17:08.972743 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:17:08.973320 master-0 kubenswrapper[4035]: E0319 09:17:08.972874 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" 
not found Mar 19 09:17:08.973320 master-0 kubenswrapper[4035]: E0319 09:17:08.972938 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:10.972919366 +0000 UTC m=+40.431534307 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:10.288234 master-0 kubenswrapper[4035]: I0319 09:17:10.288190 4035 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 09:17:10.636028 master-0 kubenswrapper[4035]: I0319 09:17:10.635996 4035 csr.go:261] certificate signing request csr-wdjmj is approved, waiting to be issued Mar 19 09:17:10.645493 master-0 kubenswrapper[4035]: I0319 09:17:10.645455 4035 csr.go:257] certificate signing request csr-wdjmj is issued Mar 19 09:17:10.987329 master-0 kubenswrapper[4035]: I0319 09:17:10.987131 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:17:10.987329 master-0 kubenswrapper[4035]: E0319 09:17:10.987269 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:10.987496 master-0 kubenswrapper[4035]: E0319 09:17:10.987336 4035 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:14.987315997 +0000 UTC m=+44.445930938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:11.647122 master-0 kubenswrapper[4035]: I0319 09:17:11.647047 4035 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 09:09:07 +0000 UTC, rotation deadline is 2026-03-20 03:54:30.599707819 +0000 UTC Mar 19 09:17:11.647122 master-0 kubenswrapper[4035]: I0319 09:17:11.647094 4035 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h37m18.952617686s for next certificate rotation Mar 19 09:17:12.648307 master-0 kubenswrapper[4035]: I0319 09:17:12.648236 4035 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 09:09:07 +0000 UTC, rotation deadline is 2026-03-20 04:13:09.060676571 +0000 UTC Mar 19 09:17:12.648307 master-0 kubenswrapper[4035]: I0319 09:17:12.648281 4035 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h55m56.412399927s for next certificate rotation Mar 19 09:17:13.505081 master-0 kubenswrapper[4035]: I0319 09:17:13.504654 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" event={"ID":"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c","Type":"ContainerStarted","Data":"794857861e41452767f8150da770c0fdb6415a1b4c58da2ca5c6bb1b5694eb77"} Mar 19 09:17:13.507169 master-0 kubenswrapper[4035]: I0319 09:17:13.507091 4035 generic.go:334] "Generic (PLEG): container finished" podID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" 
containerID="4f7ae82c42fcdc2525bbc875f58985f627c3385f9956bdf7d697087dac6e3a2f" exitCode=0 Mar 19 09:17:13.507169 master-0 kubenswrapper[4035]: I0319 09:17:13.507163 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kw4xv" event={"ID":"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c","Type":"ContainerDied","Data":"4f7ae82c42fcdc2525bbc875f58985f627c3385f9956bdf7d697087dac6e3a2f"} Mar 19 09:17:13.543619 master-0 kubenswrapper[4035]: I0319 09:17:13.543477 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" podStartSLOduration=2.0190125820000002 podStartE2EDuration="6.543450697s" podCreationTimestamp="2026-03-19 09:17:07 +0000 UTC" firstStartedPulling="2026-03-19 09:17:08.049693676 +0000 UTC m=+37.508308627" lastFinishedPulling="2026-03-19 09:17:12.574131801 +0000 UTC m=+42.032746742" observedRunningTime="2026-03-19 09:17:13.524309679 +0000 UTC m=+42.982924620" watchObservedRunningTime="2026-03-19 09:17:13.543450697 +0000 UTC m=+43.002065658" Mar 19 09:17:14.523403 master-0 kubenswrapper[4035]: I0319 09:17:14.523377 4035 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-kw4xv" Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641113 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-sno-bootstrap-files\") pod \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641170 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5sbl\" (UniqueName: \"kubernetes.io/projected/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-kube-api-access-k5sbl\") pod \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641183 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-var-run-resolv-conf\") pod \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641202 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-ca-bundle\") pod \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641223 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-resolv-conf\") pod \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\" (UID: \"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c\") " Mar 19 09:17:14.641778 master-0 
kubenswrapper[4035]: I0319 09:17:14.641293 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" (UID: "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641378 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" (UID: "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641409 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" (UID: "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:17:14.641778 master-0 kubenswrapper[4035]: I0319 09:17:14.641440 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" (UID: "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c"). InnerVolumeSpecName "host-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:17:14.646531 master-0 kubenswrapper[4035]: I0319 09:17:14.646482 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-kube-api-access-k5sbl" (OuterVolumeSpecName: "kube-api-access-k5sbl") pod "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" (UID: "47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c"). InnerVolumeSpecName "kube-api-access-k5sbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:17:14.742578 master-0 kubenswrapper[4035]: I0319 09:17:14.742477 4035 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 19 09:17:14.742578 master-0 kubenswrapper[4035]: I0319 09:17:14.742516 4035 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5sbl\" (UniqueName: \"kubernetes.io/projected/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-kube-api-access-k5sbl\") on node \"master-0\" DevicePath \"\"" Mar 19 09:17:14.742578 master-0 kubenswrapper[4035]: I0319 09:17:14.742530 4035 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 19 09:17:14.742578 master-0 kubenswrapper[4035]: I0319 09:17:14.742560 4035 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:17:14.742578 master-0 kubenswrapper[4035]: I0319 09:17:14.742574 4035 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c-host-resolv-conf\") on node \"master-0\" DevicePath 
\"\"" Mar 19 09:17:15.043486 master-0 kubenswrapper[4035]: I0319 09:17:15.043430 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:17:15.043692 master-0 kubenswrapper[4035]: E0319 09:17:15.043606 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:15.043692 master-0 kubenswrapper[4035]: E0319 09:17:15.043678 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:23.043659495 +0000 UTC m=+52.502274436 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:15.512469 master-0 kubenswrapper[4035]: I0319 09:17:15.512385 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-kw4xv" event={"ID":"47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c","Type":"ContainerDied","Data":"bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650"} Mar 19 09:17:15.512469 master-0 kubenswrapper[4035]: I0319 09:17:15.512442 4035 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650" Mar 19 09:17:15.512469 master-0 kubenswrapper[4035]: I0319 09:17:15.512446 4035 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kw4xv" Mar 19 09:17:15.743184 master-0 kubenswrapper[4035]: I0319 09:17:15.742571 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-d7s44"] Mar 19 09:17:15.743184 master-0 kubenswrapper[4035]: E0319 09:17:15.742657 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:17:15.743184 master-0 kubenswrapper[4035]: I0319 09:17:15.742672 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:17:15.743184 master-0 kubenswrapper[4035]: I0319 09:17:15.742769 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:17:15.743184 master-0 kubenswrapper[4035]: 
I0319 09:17:15.742953 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-d7s44" Mar 19 09:17:15.748716 master-0 kubenswrapper[4035]: I0319 09:17:15.748660 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9tpl\" (UniqueName: \"kubernetes.io/projected/307996ba-f4bd-4504-bf14-2d5a7a101016-kube-api-access-f9tpl\") pod \"mtu-prober-d7s44\" (UID: \"307996ba-f4bd-4504-bf14-2d5a7a101016\") " pod="openshift-network-operator/mtu-prober-d7s44" Mar 19 09:17:15.849822 master-0 kubenswrapper[4035]: I0319 09:17:15.849757 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9tpl\" (UniqueName: \"kubernetes.io/projected/307996ba-f4bd-4504-bf14-2d5a7a101016-kube-api-access-f9tpl\") pod \"mtu-prober-d7s44\" (UID: \"307996ba-f4bd-4504-bf14-2d5a7a101016\") " pod="openshift-network-operator/mtu-prober-d7s44" Mar 19 09:17:15.865314 master-0 kubenswrapper[4035]: I0319 09:17:15.865238 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9tpl\" (UniqueName: \"kubernetes.io/projected/307996ba-f4bd-4504-bf14-2d5a7a101016-kube-api-access-f9tpl\") pod \"mtu-prober-d7s44\" (UID: \"307996ba-f4bd-4504-bf14-2d5a7a101016\") " pod="openshift-network-operator/mtu-prober-d7s44" Mar 19 09:17:16.057801 master-0 kubenswrapper[4035]: I0319 09:17:16.057637 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-d7s44" Mar 19 09:17:16.350842 master-0 kubenswrapper[4035]: I0319 09:17:16.350804 4035 scope.go:117] "RemoveContainer" containerID="c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644" Mar 19 09:17:16.351073 master-0 kubenswrapper[4035]: I0319 09:17:16.350838 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 19 09:17:16.516349 master-0 kubenswrapper[4035]: I0319 09:17:16.516263 4035 generic.go:334] "Generic (PLEG): container finished" podID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerID="e2912f5a07027e593c03c831722de1c74b974cbf7fe0986009830ada22289435" exitCode=0 Mar 19 09:17:16.516349 master-0 kubenswrapper[4035]: I0319 09:17:16.516309 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-d7s44" event={"ID":"307996ba-f4bd-4504-bf14-2d5a7a101016","Type":"ContainerDied","Data":"e2912f5a07027e593c03c831722de1c74b974cbf7fe0986009830ada22289435"} Mar 19 09:17:16.516349 master-0 kubenswrapper[4035]: I0319 09:17:16.516369 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-d7s44" event={"ID":"307996ba-f4bd-4504-bf14-2d5a7a101016","Type":"ContainerStarted","Data":"8a9fbbe858fe11080717c6b2043df87705b18427f15663b4d039cae1dd0e63eb"} Mar 19 09:17:17.520886 master-0 kubenswrapper[4035]: I0319 09:17:17.520846 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 19 09:17:17.521954 master-0 kubenswrapper[4035]: I0319 09:17:17.521580 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"9df04d5fbcf74c680b5a31ee14b15b95259c81da87f4ed60f22768d81cdac068"} Mar 19 09:17:17.538881 master-0 kubenswrapper[4035]: I0319 09:17:17.538785 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.538755877 podStartE2EDuration="1.538755877s" podCreationTimestamp="2026-03-19 09:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:17:17.538187652 +0000 UTC m=+46.996802593" watchObservedRunningTime="2026-03-19 09:17:17.538755877 +0000 UTC m=+46.997370858" Mar 19 09:17:17.546089 master-0 kubenswrapper[4035]: I0319 09:17:17.546027 4035 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-d7s44" Mar 19 09:17:17.659927 master-0 kubenswrapper[4035]: I0319 09:17:17.659847 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9tpl\" (UniqueName: \"kubernetes.io/projected/307996ba-f4bd-4504-bf14-2d5a7a101016-kube-api-access-f9tpl\") pod \"307996ba-f4bd-4504-bf14-2d5a7a101016\" (UID: \"307996ba-f4bd-4504-bf14-2d5a7a101016\") " Mar 19 09:17:17.662866 master-0 kubenswrapper[4035]: I0319 09:17:17.662823 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/307996ba-f4bd-4504-bf14-2d5a7a101016-kube-api-access-f9tpl" (OuterVolumeSpecName: "kube-api-access-f9tpl") pod "307996ba-f4bd-4504-bf14-2d5a7a101016" (UID: "307996ba-f4bd-4504-bf14-2d5a7a101016"). InnerVolumeSpecName "kube-api-access-f9tpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:17:17.761186 master-0 kubenswrapper[4035]: I0319 09:17:17.761100 4035 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9tpl\" (UniqueName: \"kubernetes.io/projected/307996ba-f4bd-4504-bf14-2d5a7a101016-kube-api-access-f9tpl\") on node \"master-0\" DevicePath \"\"" Mar 19 09:17:18.527450 master-0 kubenswrapper[4035]: I0319 09:17:18.527361 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-d7s44" event={"ID":"307996ba-f4bd-4504-bf14-2d5a7a101016","Type":"ContainerDied","Data":"8a9fbbe858fe11080717c6b2043df87705b18427f15663b4d039cae1dd0e63eb"} Mar 19 09:17:18.527450 master-0 kubenswrapper[4035]: I0319 09:17:18.527441 4035 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9fbbe858fe11080717c6b2043df87705b18427f15663b4d039cae1dd0e63eb" Mar 19 09:17:18.527450 master-0 kubenswrapper[4035]: I0319 09:17:18.527380 4035 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-d7s44" Mar 19 09:17:20.758015 master-0 kubenswrapper[4035]: I0319 09:17:20.757966 4035 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-d7s44"] Mar 19 09:17:20.763923 master-0 kubenswrapper[4035]: I0319 09:17:20.763754 4035 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-d7s44"] Mar 19 09:17:21.342470 master-0 kubenswrapper[4035]: I0319 09:17:21.342416 4035 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="307996ba-f4bd-4504-bf14-2d5a7a101016" path="/var/lib/kubelet/pods/307996ba-f4bd-4504-bf14-2d5a7a101016/volumes" Mar 19 09:17:23.100065 master-0 kubenswrapper[4035]: I0319 09:17:23.100002 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:17:23.100722 master-0 kubenswrapper[4035]: E0319 09:17:23.100133 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:23.100722 master-0 kubenswrapper[4035]: E0319 09:17:23.100201 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:39.100182081 +0000 UTC m=+68.558797022 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:25.626980 master-0 kubenswrapper[4035]: I0319 09:17:25.626934 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8pt59"] Mar 19 09:17:25.627399 master-0 kubenswrapper[4035]: E0319 09:17:25.627022 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerName="prober" Mar 19 09:17:25.627399 master-0 kubenswrapper[4035]: I0319 09:17:25.627037 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerName="prober" Mar 19 09:17:25.627399 master-0 kubenswrapper[4035]: I0319 09:17:25.627062 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerName="prober" Mar 19 09:17:25.627399 master-0 kubenswrapper[4035]: I0319 09:17:25.627241 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.629130 master-0 kubenswrapper[4035]: I0319 09:17:25.629097 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 09:17:25.629363 master-0 kubenswrapper[4035]: I0319 09:17:25.629340 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 09:17:25.629447 master-0 kubenswrapper[4035]: I0319 09:17:25.629420 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 09:17:25.629524 master-0 kubenswrapper[4035]: I0319 09:17:25.629475 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 09:17:25.718783 master-0 kubenswrapper[4035]: I0319 09:17:25.718744 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719048 master-0 kubenswrapper[4035]: I0319 09:17:25.719023 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719152 master-0 kubenswrapper[4035]: I0319 09:17:25.719134 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " 
pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719267 master-0 kubenswrapper[4035]: I0319 09:17:25.719249 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719372 master-0 kubenswrapper[4035]: I0319 09:17:25.719356 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w5fk\" (UniqueName: \"kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719491 master-0 kubenswrapper[4035]: I0319 09:17:25.719475 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719627 master-0 kubenswrapper[4035]: I0319 09:17:25.719610 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719735 master-0 kubenswrapper[4035]: I0319 09:17:25.719705 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: 
\"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719842 master-0 kubenswrapper[4035]: I0319 09:17:25.719826 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.719948 master-0 kubenswrapper[4035]: I0319 09:17:25.719931 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.720047 master-0 kubenswrapper[4035]: I0319 09:17:25.720031 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.720144 master-0 kubenswrapper[4035]: I0319 09:17:25.720128 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.720241 master-0 kubenswrapper[4035]: I0319 09:17:25.720224 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config\") pod \"multus-8pt59\" (UID: 
\"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.720343 master-0 kubenswrapper[4035]: I0319 09:17:25.720327 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.720442 master-0 kubenswrapper[4035]: I0319 09:17:25.720426 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.720579 master-0 kubenswrapper[4035]: I0319 09:17:25.720532 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.720641 master-0 kubenswrapper[4035]: I0319 09:17:25.720588 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.821699 master-0 kubenswrapper[4035]: I0319 09:17:25.821649 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: 
\"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.821699 master-0 kubenswrapper[4035]: I0319 09:17:25.821698 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.821996 master-0 kubenswrapper[4035]: I0319 09:17:25.821780 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.821996 master-0 kubenswrapper[4035]: I0319 09:17:25.821838 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.821996 master-0 kubenswrapper[4035]: I0319 09:17:25.821848 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.821996 master-0 kubenswrapper[4035]: I0319 09:17:25.821899 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 
09:17:25.821996 master-0 kubenswrapper[4035]: I0319 09:17:25.821965 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.821996 master-0 kubenswrapper[4035]: I0319 09:17:25.821989 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822151 master-0 kubenswrapper[4035]: I0319 09:17:25.822011 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822151 master-0 kubenswrapper[4035]: I0319 09:17:25.822026 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822151 master-0 kubenswrapper[4035]: I0319 09:17:25.822041 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822245 master-0 kubenswrapper[4035]: I0319 09:17:25.822198 4035 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822283 master-0 kubenswrapper[4035]: I0319 09:17:25.822267 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822315 master-0 kubenswrapper[4035]: I0319 09:17:25.822297 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822342 master-0 kubenswrapper[4035]: I0319 09:17:25.822316 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822342 master-0 kubenswrapper[4035]: I0319 09:17:25.822319 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822395 master-0 kubenswrapper[4035]: I0319 09:17:25.822365 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822395 master-0 kubenswrapper[4035]: I0319 09:17:25.822369 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822395 master-0 kubenswrapper[4035]: I0319 09:17:25.822382 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822477 master-0 kubenswrapper[4035]: I0319 09:17:25.822444 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822506 master-0 kubenswrapper[4035]: I0319 09:17:25.822493 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822560 master-0 kubenswrapper[4035]: I0319 09:17:25.822516 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: 
\"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822560 master-0 kubenswrapper[4035]: I0319 09:17:25.822486 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822621 master-0 kubenswrapper[4035]: I0319 09:17:25.822566 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822621 master-0 kubenswrapper[4035]: I0319 09:17:25.822582 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5fk\" (UniqueName: \"kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822621 master-0 kubenswrapper[4035]: I0319 09:17:25.822582 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822621 master-0 kubenswrapper[4035]: I0319 09:17:25.822602 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822621 
master-0 kubenswrapper[4035]: I0319 09:17:25.822603 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822621 master-0 kubenswrapper[4035]: I0319 09:17:25.822617 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822791 master-0 kubenswrapper[4035]: I0319 09:17:25.822617 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.822791 master-0 kubenswrapper[4035]: I0319 09:17:25.822686 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.823254 master-0 kubenswrapper[4035]: I0319 09:17:25.823224 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.823676 master-0 kubenswrapper[4035]: I0319 09:17:25.823654 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.827478 master-0 kubenswrapper[4035]: I0319 09:17:25.827454 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jzj4h"] Mar 19 09:17:25.828158 master-0 kubenswrapper[4035]: I0319 09:17:25.828142 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.831705 master-0 kubenswrapper[4035]: I0319 09:17:25.831672 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 19 09:17:25.833177 master-0 kubenswrapper[4035]: I0319 09:17:25.833155 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 19 09:17:25.843795 master-0 kubenswrapper[4035]: I0319 09:17:25.843687 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5fk\" (UniqueName: \"kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.923431 master-0 kubenswrapper[4035]: I0319 09:17:25.923317 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.923431 master-0 kubenswrapper[4035]: I0319 09:17:25.923369 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.923431 master-0 kubenswrapper[4035]: I0319 09:17:25.923399 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.923685 master-0 kubenswrapper[4035]: I0319 09:17:25.923449 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.923685 master-0 kubenswrapper[4035]: I0319 09:17:25.923472 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.923685 master-0 kubenswrapper[4035]: I0319 09:17:25.923498 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmdk\" (UniqueName: \"kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: 
\"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.923685 master-0 kubenswrapper[4035]: I0319 09:17:25.923519 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.923685 master-0 kubenswrapper[4035]: I0319 09:17:25.923581 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:25.940388 master-0 kubenswrapper[4035]: I0319 09:17:25.940343 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8pt59" Mar 19 09:17:25.950629 master-0 kubenswrapper[4035]: W0319 09:17:25.950578 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09cc190d_5647_40a1_bfe9_5355bcb33b10.slice/crio-0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65 WatchSource:0}: Error finding container 0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65: Status 404 returned error can't find the container with id 0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65 Mar 19 09:17:26.024500 master-0 kubenswrapper[4035]: I0319 09:17:26.024433 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.024686 master-0 kubenswrapper[4035]: I0319 09:17:26.024520 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.024686 master-0 kubenswrapper[4035]: I0319 09:17:26.024623 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.024686 master-0 kubenswrapper[4035]: I0319 09:17:26.024682 4035 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.024881 master-0 kubenswrapper[4035]: I0319 09:17:26.024839 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.024921 master-0 kubenswrapper[4035]: I0319 09:17:26.024885 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.024921 master-0 kubenswrapper[4035]: I0319 09:17:26.024895 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.024975 master-0 kubenswrapper[4035]: I0319 09:17:26.024946 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " 
pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.025051 master-0 kubenswrapper[4035]: I0319 09:17:26.025010 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmdk\" (UniqueName: \"kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.025099 master-0 kubenswrapper[4035]: I0319 09:17:26.025072 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.025248 master-0 kubenswrapper[4035]: I0319 09:17:26.025197 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.025299 master-0 kubenswrapper[4035]: I0319 09:17:26.025216 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.025503 master-0 kubenswrapper[4035]: I0319 09:17:26.025460 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.025749 master-0 kubenswrapper[4035]: I0319 09:17:26.025723 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.026199 master-0 kubenswrapper[4035]: I0319 09:17:26.026169 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.039654 master-0 kubenswrapper[4035]: I0319 09:17:26.039616 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmdk\" (UniqueName: \"kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.140734 master-0 kubenswrapper[4035]: I0319 09:17:26.140671 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:17:26.150727 master-0 kubenswrapper[4035]: W0319 09:17:26.150684 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60683578_6673_4aff_b1d5_3167d534ac08.slice/crio-033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1 WatchSource:0}: Error finding container 033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1: Status 404 returned error can't find the container with id 033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1 Mar 19 09:17:26.546799 master-0 kubenswrapper[4035]: I0319 09:17:26.546671 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerStarted","Data":"033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1"} Mar 19 09:17:26.548260 master-0 kubenswrapper[4035]: I0319 09:17:26.548185 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8pt59" event={"ID":"09cc190d-5647-40a1-bfe9-5355bcb33b10","Type":"ContainerStarted","Data":"0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65"} Mar 19 09:17:26.627829 master-0 kubenswrapper[4035]: I0319 09:17:26.627784 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lflg7"] Mar 19 09:17:26.628515 master-0 kubenswrapper[4035]: I0319 09:17:26.628133 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:26.628515 master-0 kubenswrapper[4035]: E0319 09:17:26.628195 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:26.731418 master-0 kubenswrapper[4035]: I0319 09:17:26.730887 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:26.731418 master-0 kubenswrapper[4035]: I0319 09:17:26.731341 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt99t\" (UniqueName: \"kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:26.832099 master-0 kubenswrapper[4035]: I0319 09:17:26.832003 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:26.832099 master-0 kubenswrapper[4035]: I0319 09:17:26.832039 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt99t\" 
(UniqueName: \"kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:26.832269 master-0 kubenswrapper[4035]: E0319 09:17:26.832159 4035 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:26.832269 master-0 kubenswrapper[4035]: E0319 09:17:26.832207 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:27.332193642 +0000 UTC m=+56.790808583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:26.848815 master-0 kubenswrapper[4035]: I0319 09:17:26.848772 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt99t\" (UniqueName: \"kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:27.335466 master-0 kubenswrapper[4035]: I0319 09:17:27.335426 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:27.335680 master-0 
kubenswrapper[4035]: E0319 09:17:27.335575 4035 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:27.335680 master-0 kubenswrapper[4035]: E0319 09:17:27.335639 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:28.335621593 +0000 UTC m=+57.794236534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:28.333556 master-0 kubenswrapper[4035]: I0319 09:17:28.333489 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:28.334030 master-0 kubenswrapper[4035]: E0319 09:17:28.333606 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:28.345796 master-0 kubenswrapper[4035]: I0319 09:17:28.344796 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:28.345796 master-0 kubenswrapper[4035]: E0319 09:17:28.344945 4035 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:28.345796 master-0 kubenswrapper[4035]: E0319 09:17:28.345007 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:30.344983204 +0000 UTC m=+59.803598145 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:28.553079 master-0 kubenswrapper[4035]: I0319 09:17:28.552981 4035 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="be5668fe1c571dde1e396c091e4c7ec37d88531f9ac3613886b71274efe031c6" exitCode=0 Mar 19 09:17:28.553079 master-0 kubenswrapper[4035]: I0319 09:17:28.553021 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerDied","Data":"be5668fe1c571dde1e396c091e4c7ec37d88531f9ac3613886b71274efe031c6"} Mar 19 09:17:30.926192 master-0 kubenswrapper[4035]: I0319 09:17:30.926099 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:30.927110 master-0 kubenswrapper[4035]: E0319 09:17:30.926267 4035 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:30.927110 master-0 kubenswrapper[4035]: E0319 09:17:30.926327 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:34.926307351 +0000 UTC m=+64.384922292 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:30.927110 master-0 kubenswrapper[4035]: I0319 09:17:30.926783 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:30.927110 master-0 kubenswrapper[4035]: E0319 09:17:30.926906 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:32.333893 master-0 kubenswrapper[4035]: I0319 09:17:32.333835 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:32.334783 master-0 kubenswrapper[4035]: E0319 09:17:32.333977 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:34.334749 master-0 kubenswrapper[4035]: I0319 09:17:34.334284 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:34.334749 master-0 kubenswrapper[4035]: E0319 09:17:34.334433 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:34.960175 master-0 kubenswrapper[4035]: I0319 09:17:34.960123 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:34.960459 master-0 kubenswrapper[4035]: E0319 09:17:34.960297 4035 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:34.960459 master-0 kubenswrapper[4035]: E0319 09:17:34.960372 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:42.960348131 +0000 UTC m=+72.418963082 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:35.943381 master-0 kubenswrapper[4035]: I0319 09:17:35.943270 4035 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="575ffdeb036bb96884333ecfd381cd08c10d745628010252b611aaa18d03bb88" exitCode=0 Mar 19 09:17:35.943381 master-0 kubenswrapper[4035]: I0319 09:17:35.943323 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerDied","Data":"575ffdeb036bb96884333ecfd381cd08c10d745628010252b611aaa18d03bb88"} Mar 19 09:17:36.333892 master-0 kubenswrapper[4035]: I0319 09:17:36.333791 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:36.334059 master-0 kubenswrapper[4035]: E0319 09:17:36.333925 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:38.030516 master-0 kubenswrapper[4035]: I0319 09:17:38.029483 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"] Mar 19 09:17:38.030516 master-0 kubenswrapper[4035]: I0319 09:17:38.029954 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.033271 master-0 kubenswrapper[4035]: I0319 09:17:38.031727 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 09:17:38.033271 master-0 kubenswrapper[4035]: I0319 09:17:38.032998 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 09:17:38.033271 master-0 kubenswrapper[4035]: I0319 09:17:38.032895 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 09:17:38.035835 master-0 kubenswrapper[4035]: I0319 09:17:38.033376 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 19 09:17:38.035940 master-0 kubenswrapper[4035]: I0319 09:17:38.033837 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 09:17:38.080942 master-0 kubenswrapper[4035]: I0319 09:17:38.080896 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.081144 master-0 kubenswrapper[4035]: I0319 09:17:38.080974 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbw6q\" (UniqueName: \"kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 
09:17:38.081144 master-0 kubenswrapper[4035]: I0319 09:17:38.081004 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.081144 master-0 kubenswrapper[4035]: I0319 09:17:38.081078 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.182111 master-0 kubenswrapper[4035]: I0319 09:17:38.182058 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.182367 master-0 kubenswrapper[4035]: I0319 09:17:38.182316 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbw6q\" (UniqueName: \"kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.182465 master-0 kubenswrapper[4035]: I0319 09:17:38.182429 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.182625 master-0 kubenswrapper[4035]: I0319 09:17:38.182537 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.183213 master-0 kubenswrapper[4035]: I0319 09:17:38.183175 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.183648 master-0 kubenswrapper[4035]: I0319 09:17:38.183594 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.187371 master-0 kubenswrapper[4035]: I0319 09:17:38.187280 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.209651 master-0 kubenswrapper[4035]: I0319 09:17:38.209575 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbw6q\" (UniqueName: \"kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:17:38.254441 master-0 kubenswrapper[4035]: I0319 09:17:38.254387 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6q4bt"] Mar 19 09:17:38.255080 master-0 kubenswrapper[4035]: I0319 09:17:38.255051 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.259759 master-0 kubenswrapper[4035]: I0319 09:17:38.258710 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 09:17:38.262590 master-0 kubenswrapper[4035]: I0319 09:17:38.260785 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 09:17:38.282966 master-0 kubenswrapper[4035]: I0319 09:17:38.282867 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-systemd-units\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.282966 master-0 kubenswrapper[4035]: I0319 09:17:38.282911 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.282966 master-0 kubenswrapper[4035]: I0319 09:17:38.282935 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-etc-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.282966 master-0 kubenswrapper[4035]: I0319 09:17:38.282955 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vw8\" (UniqueName: \"kubernetes.io/projected/4da90d03-46bd-41be-9224-0c63b31c535c-kube-api-access-h5vw8\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.282977 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-var-lib-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.282997 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.283015 
4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-env-overrides\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.283032 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-netns\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.283050 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-netd\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.283122 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-config\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.283141 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-script-lib\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.283210 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-slash\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283316 master-0 kubenswrapper[4035]: I0319 09:17:38.283314 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-kubelet\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283785 master-0 kubenswrapper[4035]: I0319 09:17:38.283348 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-log-socket\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283785 master-0 kubenswrapper[4035]: I0319 09:17:38.283375 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-bin\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283785 master-0 kubenswrapper[4035]: I0319 09:17:38.283408 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4da90d03-46bd-41be-9224-0c63b31c535c-ovn-node-metrics-cert\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283785 master-0 kubenswrapper[4035]: I0319 09:17:38.283439 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-ovn-kubernetes\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283785 master-0 kubenswrapper[4035]: I0319 09:17:38.283467 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-systemd\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283785 master-0 kubenswrapper[4035]: I0319 09:17:38.283492 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-ovn\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.283785 master-0 kubenswrapper[4035]: I0319 09:17:38.283517 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-node-log\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.334052 master-0 kubenswrapper[4035]: I0319 09:17:38.333983 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:38.334245 master-0 kubenswrapper[4035]: E0319 09:17:38.334121 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:38.348845 master-0 kubenswrapper[4035]: I0319 09:17:38.348765 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:17:38.383914 master-0 kubenswrapper[4035]: I0319 09:17:38.383825 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-systemd-units\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.383914 master-0 kubenswrapper[4035]: I0319 09:17:38.383862 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.383914 master-0 kubenswrapper[4035]: I0319 09:17:38.383883 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-etc-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.383914 master-0 kubenswrapper[4035]: I0319 09:17:38.383898 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vw8\" (UniqueName: \"kubernetes.io/projected/4da90d03-46bd-41be-9224-0c63b31c535c-kube-api-access-h5vw8\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.383914 master-0 kubenswrapper[4035]: I0319 09:17:38.383914 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-var-lib-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.383914 master-0 kubenswrapper[4035]: I0319 09:17:38.383929 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.383914 master-0 kubenswrapper[4035]: I0319 09:17:38.383945 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-env-overrides\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.383960 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-netns\") pod \"ovnkube-node-6q4bt\" (UID:
\"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.383975 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-netd\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.383993 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-config\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384006 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-script-lib\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384020 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-slash\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384050 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-kubelet\") pod \"ovnkube-node-6q4bt\" (UID: 
\"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384065 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4da90d03-46bd-41be-9224-0c63b31c535c-ovn-node-metrics-cert\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384079 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-log-socket\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384093 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-bin\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384108 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-node-log\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384147 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-ovn-kubernetes\") pod \"ovnkube-node-6q4bt\" (UID: 
\"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384176 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-systemd\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384196 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-ovn\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384255 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-ovn\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384296 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-systemd-units\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384323 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6q4bt\" (UID: 
\"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.384715 master-0 kubenswrapper[4035]: I0319 09:17:38.384351 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-etc-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.384683 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-var-lib-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.384719 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-openvswitch\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.385476 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-env-overrides\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.385529 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-netns\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.385580 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-netd\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.385831 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-log-socket\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.385971 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-slash\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.386037 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-kubelet\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.386106 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-ovn-kubernetes\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 
09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.386132 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-bin\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.386185 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-systemd\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.386460 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-node-log\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.386669 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-config\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.388858 master-0 kubenswrapper[4035]: I0319 09:17:38.388378 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-script-lib\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.390981 master-0 kubenswrapper[4035]: I0319 09:17:38.389519 4035 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4da90d03-46bd-41be-9224-0c63b31c535c-ovn-node-metrics-cert\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.415184 master-0 kubenswrapper[4035]: I0319 09:17:38.415136 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5vw8\" (UniqueName: \"kubernetes.io/projected/4da90d03-46bd-41be-9224-0c63b31c535c-kube-api-access-h5vw8\") pod \"ovnkube-node-6q4bt\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:38.581733 master-0 kubenswrapper[4035]: I0319 09:17:38.581623 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:17:39.137950 master-0 kubenswrapper[4035]: W0319 09:17:39.137884 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4da90d03_46bd_41be_9224_0c63b31c535c.slice/crio-8ee54fa7ada4c624d77b0e2f3dcbdb8c8d02973745fe65c2d05740e42c92ec9e WatchSource:0}: Error finding container 8ee54fa7ada4c624d77b0e2f3dcbdb8c8d02973745fe65c2d05740e42c92ec9e: Status 404 returned error can't find the container with id 8ee54fa7ada4c624d77b0e2f3dcbdb8c8d02973745fe65c2d05740e42c92ec9e Mar 19 09:17:39.138851 master-0 kubenswrapper[4035]: W0319 09:17:39.138803 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1187ddcd_3b78_4b3f_9b12_06ce76cb6040.slice/crio-3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814 WatchSource:0}: Error finding container 3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814: Status 404 returned error can't find the container with id 
3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814 Mar 19 09:17:39.188676 master-0 kubenswrapper[4035]: I0319 09:17:39.188628 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:17:39.188896 master-0 kubenswrapper[4035]: E0319 09:17:39.188831 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:39.188959 master-0 kubenswrapper[4035]: E0319 09:17:39.188948 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:11.18892611 +0000 UTC m=+100.647541071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:17:39.962991 master-0 kubenswrapper[4035]: I0319 09:17:39.962696 4035 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="20b5e36de175a38e8938a8e709cd8fa1a5177137ac9ceff4b103028234492d38" exitCode=0 Mar 19 09:17:39.962991 master-0 kubenswrapper[4035]: I0319 09:17:39.962871 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerDied","Data":"20b5e36de175a38e8938a8e709cd8fa1a5177137ac9ceff4b103028234492d38"} Mar 19 09:17:39.964641 master-0 kubenswrapper[4035]: I0319 09:17:39.964601 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8pt59" event={"ID":"09cc190d-5647-40a1-bfe9-5355bcb33b10","Type":"ContainerStarted","Data":"c18a0711ecfede87bcb059f8520d97a687865b8690645e4cc8502d4b4e53e6ae"} Mar 19 09:17:39.966695 master-0 kubenswrapper[4035]: I0319 09:17:39.966655 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" event={"ID":"1187ddcd-3b78-4b3f-9b12-06ce76cb6040","Type":"ContainerStarted","Data":"d958d33a6f1260b8371eeab30f37b455f7b6aff40ef53ea8eaf316057e6668c0"} Mar 19 09:17:39.966750 master-0 kubenswrapper[4035]: I0319 09:17:39.966705 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" event={"ID":"1187ddcd-3b78-4b3f-9b12-06ce76cb6040","Type":"ContainerStarted","Data":"3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814"} Mar 19 09:17:39.967916 master-0 kubenswrapper[4035]: I0319 
09:17:39.967882 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"8ee54fa7ada4c624d77b0e2f3dcbdb8c8d02973745fe65c2d05740e42c92ec9e"} Mar 19 09:17:40.333942 master-0 kubenswrapper[4035]: I0319 09:17:40.333891 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:40.334442 master-0 kubenswrapper[4035]: E0319 09:17:40.334040 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:41.228418 master-0 kubenswrapper[4035]: I0319 09:17:41.228349 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8pt59" podStartSLOduration=2.94764299 podStartE2EDuration="16.228329368s" podCreationTimestamp="2026-03-19 09:17:25 +0000 UTC" firstStartedPulling="2026-03-19 09:17:25.952076404 +0000 UTC m=+55.410691345" lastFinishedPulling="2026-03-19 09:17:39.232762782 +0000 UTC m=+68.691377723" observedRunningTime="2026-03-19 09:17:40.000485281 +0000 UTC m=+69.459100232" watchObservedRunningTime="2026-03-19 09:17:41.228329368 +0000 UTC m=+70.686944309" Mar 19 09:17:41.228974 master-0 kubenswrapper[4035]: I0319 09:17:41.228953 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-lql9l"] Mar 19 09:17:41.229945 master-0 kubenswrapper[4035]: I0319 09:17:41.229221 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:17:41.229945 master-0 kubenswrapper[4035]: E0319 09:17:41.229269 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4" Mar 19 09:17:41.305375 master-0 kubenswrapper[4035]: I0319 09:17:41.305271 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:17:41.406044 master-0 kubenswrapper[4035]: I0319 09:17:41.405995 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:17:41.563775 master-0 kubenswrapper[4035]: E0319 09:17:41.563463 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 09:17:41.563775 master-0 kubenswrapper[4035]: E0319 09:17:41.563501 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 09:17:41.563775 master-0 
kubenswrapper[4035]: E0319 09:17:41.563514 4035 projected.go:194] Error preparing data for projected volume kube-api-access-k6t9w for pod openshift-network-diagnostics/network-check-target-lql9l: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 09:17:41.563775 master-0 kubenswrapper[4035]: E0319 09:17:41.563593 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w podName:6cc45721-c05b-4161-91d9-d65cf6ec61d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:42.063575292 +0000 UTC m=+71.522190233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k6t9w" (UniqueName: "kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w") pod "network-check-target-lql9l" (UID: "6cc45721-c05b-4161-91d9-d65cf6ec61d4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 09:17:42.110905 master-0 kubenswrapper[4035]: I0319 09:17:42.110849 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:17:42.111084 master-0 kubenswrapper[4035]: E0319 09:17:42.111034 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 19 09:17:42.111117 master-0 kubenswrapper[4035]: E0319 09:17:42.111087 4035 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 19 09:17:42.111117 master-0 kubenswrapper[4035]: E0319 09:17:42.111099 4035 projected.go:194] Error preparing data for projected volume kube-api-access-k6t9w for pod openshift-network-diagnostics/network-check-target-lql9l: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 09:17:42.111191 master-0 kubenswrapper[4035]: E0319 09:17:42.111154 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w podName:6cc45721-c05b-4161-91d9-d65cf6ec61d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:43.111138584 +0000 UTC m=+72.569753525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-k6t9w" (UniqueName: "kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w") pod "network-check-target-lql9l" (UID: "6cc45721-c05b-4161-91d9-d65cf6ec61d4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 19 09:17:42.334323 master-0 kubenswrapper[4035]: I0319 09:17:42.334268 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:17:42.334570 master-0 kubenswrapper[4035]: I0319 09:17:42.334273 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:42.334615 master-0 kubenswrapper[4035]: E0319 09:17:42.334593 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:17:42.334676 master-0 kubenswrapper[4035]: E0319 09:17:42.334438 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4" Mar 19 09:17:43.015478 master-0 kubenswrapper[4035]: I0319 09:17:43.015365 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:17:43.015478 master-0 kubenswrapper[4035]: E0319 09:17:43.015503 4035 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 19 09:17:43.016455 master-0 kubenswrapper[4035]: E0319 09:17:43.015570 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. 
No retries permitted until 2026-03-19 09:17:59.015535601 +0000 UTC m=+88.474150542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 19 09:17:43.115809 master-0 kubenswrapper[4035]: I0319 09:17:43.115742 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:43.115974 master-0 kubenswrapper[4035]: E0319 09:17:43.115918 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 19 09:17:43.115974 master-0 kubenswrapper[4035]: E0319 09:17:43.115942 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 19 09:17:43.115974 master-0 kubenswrapper[4035]: E0319 09:17:43.115952 4035 projected.go:194] Error preparing data for projected volume kube-api-access-k6t9w for pod openshift-network-diagnostics/network-check-target-lql9l: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:43.116074 master-0 kubenswrapper[4035]: E0319 09:17:43.115995 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w podName:6cc45721-c05b-4161-91d9-d65cf6ec61d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:45.115982461 +0000 UTC m=+74.574597402 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-k6t9w" (UniqueName: "kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w") pod "network-check-target-lql9l" (UID: "6cc45721-c05b-4161-91d9-d65cf6ec61d4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:44.334735 master-0 kubenswrapper[4035]: I0319 09:17:44.334181 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:44.334735 master-0 kubenswrapper[4035]: I0319 09:17:44.334199 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:44.334735 master-0 kubenswrapper[4035]: E0319 09:17:44.334316 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:44.334735 master-0 kubenswrapper[4035]: E0319 09:17:44.334402 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:44.637717 master-0 kubenswrapper[4035]: I0319 09:17:44.637667 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-t7zwh"]
Mar 19 09:17:44.638183 master-0 kubenswrapper[4035]: I0319 09:17:44.638154 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.640240 master-0 kubenswrapper[4035]: I0319 09:17:44.639991 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 19 09:17:44.640240 master-0 kubenswrapper[4035]: I0319 09:17:44.640019 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 19 09:17:44.640240 master-0 kubenswrapper[4035]: I0319 09:17:44.640028 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 19 09:17:44.640410 master-0 kubenswrapper[4035]: I0319 09:17:44.640385 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 19 09:17:44.648202 master-0 kubenswrapper[4035]: I0319 09:17:44.647827 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 19 09:17:44.832605 master-0 kubenswrapper[4035]: I0319 09:17:44.832521 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.832815 master-0 kubenswrapper[4035]: I0319 09:17:44.832615 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpcnv\" (UniqueName: \"kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.832815 master-0 kubenswrapper[4035]: I0319 09:17:44.832661 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.832815 master-0 kubenswrapper[4035]: I0319 09:17:44.832678 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.934800 master-0 kubenswrapper[4035]: I0319 09:17:44.933800 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.934800 master-0 kubenswrapper[4035]: I0319 09:17:44.933875 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpcnv\" (UniqueName: \"kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.934800 master-0 kubenswrapper[4035]: E0319 09:17:44.933918 4035 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found
Mar 19 09:17:44.934800 master-0 kubenswrapper[4035]: I0319 09:17:44.933917 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.934800 master-0 kubenswrapper[4035]: E0319 09:17:44.933973 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert podName:47da8964-3606-4181-87fb-8f04a3065295 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:45.43395799 +0000 UTC m=+74.892572921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert") pod "network-node-identity-t7zwh" (UID: "47da8964-3606-4181-87fb-8f04a3065295") : secret "network-node-identity-cert" not found
Mar 19 09:17:44.934800 master-0 kubenswrapper[4035]: I0319 09:17:44.933989 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.934800 master-0 kubenswrapper[4035]: I0319 09:17:44.934712 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.935301 master-0 kubenswrapper[4035]: I0319 09:17:44.935261 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.964507 master-0 kubenswrapper[4035]: I0319 09:17:44.964473 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpcnv\" (UniqueName: \"kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:44.981383 master-0 kubenswrapper[4035]: I0319 09:17:44.981343 4035 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="86d7bf6f8a152beed53ca9a59153f0d5628c8aeeca38c4e7133940d1c9f346af" exitCode=0
Mar 19 09:17:44.981531 master-0 kubenswrapper[4035]: I0319 09:17:44.981384 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerDied","Data":"86d7bf6f8a152beed53ca9a59153f0d5628c8aeeca38c4e7133940d1c9f346af"}
Mar 19 09:17:45.135580 master-0 kubenswrapper[4035]: I0319 09:17:45.135418 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:45.136146 master-0 kubenswrapper[4035]: E0319 09:17:45.136105 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 19 09:17:45.136146 master-0 kubenswrapper[4035]: E0319 09:17:45.136143 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 19 09:17:45.136255 master-0 kubenswrapper[4035]: E0319 09:17:45.136158 4035 projected.go:194] Error preparing data for projected volume kube-api-access-k6t9w for pod openshift-network-diagnostics/network-check-target-lql9l: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:45.136255 master-0 kubenswrapper[4035]: E0319 09:17:45.136204 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w podName:6cc45721-c05b-4161-91d9-d65cf6ec61d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:49.136185638 +0000 UTC m=+78.594800589 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-k6t9w" (UniqueName: "kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w") pod "network-check-target-lql9l" (UID: "6cc45721-c05b-4161-91d9-d65cf6ec61d4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:45.436993 master-0 kubenswrapper[4035]: I0319 09:17:45.436926 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:45.441850 master-0 kubenswrapper[4035]: I0319 09:17:45.441802 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:45.550912 master-0 kubenswrapper[4035]: I0319 09:17:45.550818 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:17:45.561948 master-0 kubenswrapper[4035]: W0319 09:17:45.561898 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47da8964_3606_4181_87fb_8f04a3065295.slice/crio-2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d WatchSource:0}: Error finding container 2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d: Status 404 returned error can't find the container with id 2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d
Mar 19 09:17:45.984695 master-0 kubenswrapper[4035]: I0319 09:17:45.984625 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-t7zwh" event={"ID":"47da8964-3606-4181-87fb-8f04a3065295","Type":"ContainerStarted","Data":"2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d"}
Mar 19 09:17:46.334307 master-0 kubenswrapper[4035]: I0319 09:17:46.334188 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:46.334456 master-0 kubenswrapper[4035]: E0319 09:17:46.334323 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:46.334456 master-0 kubenswrapper[4035]: I0319 09:17:46.334373 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:46.334579 master-0 kubenswrapper[4035]: E0319 09:17:46.334498 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:48.333872 master-0 kubenswrapper[4035]: I0319 09:17:48.333821 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:48.333872 master-0 kubenswrapper[4035]: I0319 09:17:48.333836 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:48.335752 master-0 kubenswrapper[4035]: E0319 09:17:48.334040 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:48.335752 master-0 kubenswrapper[4035]: E0319 09:17:48.334108 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:48.344948 master-0 kubenswrapper[4035]: I0319 09:17:48.344807 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 19 09:17:49.165444 master-0 kubenswrapper[4035]: I0319 09:17:49.165374 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:49.165672 master-0 kubenswrapper[4035]: E0319 09:17:49.165525 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 19 09:17:49.165672 master-0 kubenswrapper[4035]: E0319 09:17:49.165559 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 19 09:17:49.165672 master-0 kubenswrapper[4035]: E0319 09:17:49.165571 4035 projected.go:194] Error preparing data for projected volume kube-api-access-k6t9w for pod openshift-network-diagnostics/network-check-target-lql9l: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:49.165672 master-0 kubenswrapper[4035]: E0319 09:17:49.165620 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w podName:6cc45721-c05b-4161-91d9-d65cf6ec61d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:17:57.165606056 +0000 UTC m=+86.624220997 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-k6t9w" (UniqueName: "kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w") pod "network-check-target-lql9l" (UID: "6cc45721-c05b-4161-91d9-d65cf6ec61d4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:50.333980 master-0 kubenswrapper[4035]: I0319 09:17:50.333927 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:50.334473 master-0 kubenswrapper[4035]: I0319 09:17:50.333943 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:50.334473 master-0 kubenswrapper[4035]: E0319 09:17:50.334164 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:50.334473 master-0 kubenswrapper[4035]: E0319 09:17:50.334047 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:51.354596 master-0 kubenswrapper[4035]: I0319 09:17:51.354513 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=3.354495855 podStartE2EDuration="3.354495855s" podCreationTimestamp="2026-03-19 09:17:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:17:51.354348691 +0000 UTC m=+80.812963632" watchObservedRunningTime="2026-03-19 09:17:51.354495855 +0000 UTC m=+80.813110796"
Mar 19 09:17:52.334175 master-0 kubenswrapper[4035]: I0319 09:17:52.334112 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:52.334175 master-0 kubenswrapper[4035]: I0319 09:17:52.334150 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:52.334395 master-0 kubenswrapper[4035]: E0319 09:17:52.334228 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:52.334395 master-0 kubenswrapper[4035]: E0319 09:17:52.334311 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:53.747397 master-0 kubenswrapper[4035]: I0319 09:17:53.747358 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 19 09:17:54.334038 master-0 kubenswrapper[4035]: I0319 09:17:54.333562 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:54.334038 master-0 kubenswrapper[4035]: I0319 09:17:54.333585 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:54.334038 master-0 kubenswrapper[4035]: E0319 09:17:54.333674 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:54.334038 master-0 kubenswrapper[4035]: E0319 09:17:54.333831 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:56.333711 master-0 kubenswrapper[4035]: I0319 09:17:56.333660 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:56.334167 master-0 kubenswrapper[4035]: E0319 09:17:56.333789 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:56.334167 master-0 kubenswrapper[4035]: I0319 09:17:56.333662 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:56.334167 master-0 kubenswrapper[4035]: E0319 09:17:56.333905 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:57.231335 master-0 kubenswrapper[4035]: I0319 09:17:57.231249 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:57.231576 master-0 kubenswrapper[4035]: E0319 09:17:57.231434 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 19 09:17:57.231576 master-0 kubenswrapper[4035]: E0319 09:17:57.231461 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 19 09:17:57.231576 master-0 kubenswrapper[4035]: E0319 09:17:57.231479 4035 projected.go:194] Error preparing data for projected volume kube-api-access-k6t9w for pod openshift-network-diagnostics/network-check-target-lql9l: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:57.231676 master-0 kubenswrapper[4035]: E0319 09:17:57.231586 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w podName:6cc45721-c05b-4161-91d9-d65cf6ec61d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:13.231529124 +0000 UTC m=+102.690144105 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-k6t9w" (UniqueName: "kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w") pod "network-check-target-lql9l" (UID: "6cc45721-c05b-4161-91d9-d65cf6ec61d4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:17:58.334009 master-0 kubenswrapper[4035]: I0319 09:17:58.333970 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:58.334466 master-0 kubenswrapper[4035]: I0319 09:17:58.334023 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:17:58.334466 master-0 kubenswrapper[4035]: E0319 09:17:58.334101 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:17:58.334466 master-0 kubenswrapper[4035]: E0319 09:17:58.334149 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:17:59.047084 master-0 kubenswrapper[4035]: I0319 09:17:59.046965 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:17:59.047300 master-0 kubenswrapper[4035]: E0319 09:17:59.047107 4035 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 19 09:17:59.047300 master-0 kubenswrapper[4035]: E0319 09:17:59.047182 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:31.047162575 +0000 UTC m=+120.505777536 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 19 09:18:00.020403 master-0 kubenswrapper[4035]: I0319 09:18:00.019995 4035 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="58f2d035e725f793e501aa00d5cd6dec60187d755b95ed0332885f977a2d1232" exitCode=0
Mar 19 09:18:00.020403 master-0 kubenswrapper[4035]: I0319 09:18:00.020061 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerDied","Data":"58f2d035e725f793e501aa00d5cd6dec60187d755b95ed0332885f977a2d1232"}
Mar 19 09:18:00.021906 master-0 kubenswrapper[4035]: I0319 09:18:00.021859 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-t7zwh" event={"ID":"47da8964-3606-4181-87fb-8f04a3065295","Type":"ContainerStarted","Data":"9b3fc8a626e0487acce62c5d3181f8201f7287976a42754235b1309dbd2babb2"}
Mar 19 09:18:00.021906 master-0 kubenswrapper[4035]: I0319 09:18:00.021900 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-t7zwh" event={"ID":"47da8964-3606-4181-87fb-8f04a3065295","Type":"ContainerStarted","Data":"068c492ecbc5e09d161a02452fc6cb85e031c340fc08053d4ac664bbaa43e5ed"}
Mar 19 09:18:00.025924 master-0 kubenswrapper[4035]: I0319 09:18:00.025885 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" event={"ID":"1187ddcd-3b78-4b3f-9b12-06ce76cb6040","Type":"ContainerStarted","Data":"fcb63173a1674e9ce9fc5d4b055442992b282a4bd8e174a8bafa997bfbff21e0"}
Mar 19 09:18:00.027735 master-0 kubenswrapper[4035]: I0319 09:18:00.027710 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b" exitCode=0
Mar 19 09:18:00.027786 master-0 kubenswrapper[4035]: I0319 09:18:00.027742 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"}
Mar 19 09:18:00.074608 master-0 kubenswrapper[4035]: I0319 09:18:00.074524 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=7.074503569 podStartE2EDuration="7.074503569s" podCreationTimestamp="2026-03-19 09:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:00.052908767 +0000 UTC m=+89.511523708" watchObservedRunningTime="2026-03-19 09:18:00.074503569 +0000 UTC m=+89.533118510"
Mar 19 09:18:00.099560 master-0 kubenswrapper[4035]: I0319 09:18:00.099464 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" podStartSLOduration=2.167758207 podStartE2EDuration="22.099444536s" podCreationTimestamp="2026-03-19 09:17:38 +0000 UTC" firstStartedPulling="2026-03-19 09:17:39.312460542 +0000 UTC m=+68.771075483" lastFinishedPulling="2026-03-19 09:17:59.244146851 +0000 UTC m=+88.702761812" observedRunningTime="2026-03-19 09:18:00.086300978 +0000 UTC m=+89.544915959" watchObservedRunningTime="2026-03-19 09:18:00.099444536 +0000 UTC m=+89.558059477"
Mar 19 09:18:00.100720 master-0 kubenswrapper[4035]: I0319 09:18:00.100320 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-t7zwh" podStartSLOduration=2.415557212 podStartE2EDuration="16.100313191s" podCreationTimestamp="2026-03-19 09:17:44 +0000 UTC" firstStartedPulling="2026-03-19 09:17:45.564913621 +0000 UTC m=+75.023528572" lastFinishedPulling="2026-03-19 09:17:59.24966957 +0000 UTC m=+88.708284551" observedRunningTime="2026-03-19 09:18:00.099183399 +0000 UTC m=+89.557798340" watchObservedRunningTime="2026-03-19 09:18:00.100313191 +0000 UTC m=+89.558928142"
Mar 19 09:18:00.334169 master-0 kubenswrapper[4035]: I0319 09:18:00.333886 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:00.334458 master-0 kubenswrapper[4035]: E0319 09:18:00.334435 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:18:00.334564 master-0 kubenswrapper[4035]: I0319 09:18:00.333912 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:00.334696 master-0 kubenswrapper[4035]: E0319 09:18:00.334682 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:18:01.035512 master-0 kubenswrapper[4035]: I0319 09:18:01.035410 4035 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="c8bff62b4e05425e80c7e14b2ad4d089fe60c7b7e27feb3cfc2b1fde8c062902" exitCode=0
Mar 19 09:18:01.036712 master-0 kubenswrapper[4035]: I0319 09:18:01.035579 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerDied","Data":"c8bff62b4e05425e80c7e14b2ad4d089fe60c7b7e27feb3cfc2b1fde8c062902"}
Mar 19 09:18:01.042365 master-0 kubenswrapper[4035]: I0319 09:18:01.042229 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"}
Mar 19 09:18:01.042365 master-0 kubenswrapper[4035]: I0319 09:18:01.042290 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"}
Mar 19 09:18:01.042365 master-0 kubenswrapper[4035]: I0319 09:18:01.042304 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"}
Mar 19 09:18:01.042365 master-0 kubenswrapper[4035]: I0319 09:18:01.042317 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt"
event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} Mar 19 09:18:01.042365 master-0 kubenswrapper[4035]: I0319 09:18:01.042330 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} Mar 19 09:18:01.042365 master-0 kubenswrapper[4035]: I0319 09:18:01.042344 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} Mar 19 09:18:01.343319 master-0 kubenswrapper[4035]: W0319 09:18:01.343060 4035 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 19 09:18:01.344403 master-0 kubenswrapper[4035]: I0319 09:18:01.344354 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 09:18:02.050893 master-0 kubenswrapper[4035]: I0319 09:18:02.050784 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-jzj4h" event={"ID":"60683578-6673-4aff-b1d5-3167d534ac08","Type":"ContainerStarted","Data":"5608142eb85d89ac995c9b4593dca974efaabcfb6116776469472bae5264d505"} Mar 19 09:18:02.072963 master-0 kubenswrapper[4035]: I0319 09:18:02.072681 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jzj4h" podStartSLOduration=3.936688572 podStartE2EDuration="37.072653s" podCreationTimestamp="2026-03-19 09:17:25 +0000 UTC" firstStartedPulling="2026-03-19 09:17:26.152681233 +0000 UTC m=+55.611296184" lastFinishedPulling="2026-03-19 09:17:59.288645651 +0000 UTC m=+88.747260612" observedRunningTime="2026-03-19 09:18:02.072390233 +0000 UTC m=+91.531005224" watchObservedRunningTime="2026-03-19 09:18:02.072653 +0000 UTC m=+91.531267971" Mar 19 09:18:02.085848 master-0 kubenswrapper[4035]: I0319 09:18:02.085737 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=1.085704666 podStartE2EDuration="1.085704666s" podCreationTimestamp="2026-03-19 09:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:02.085075377 +0000 UTC m=+91.543690348" watchObservedRunningTime="2026-03-19 09:18:02.085704666 +0000 UTC m=+91.544319677" Mar 19 09:18:02.334830 master-0 kubenswrapper[4035]: I0319 09:18:02.334635 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:02.334830 master-0 kubenswrapper[4035]: I0319 09:18:02.334666 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:18:02.335125 master-0 kubenswrapper[4035]: E0319 09:18:02.334907 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:18:02.335125 master-0 kubenswrapper[4035]: E0319 09:18:02.335095 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4" Mar 19 09:18:03.058376 master-0 kubenswrapper[4035]: I0319 09:18:03.058221 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} Mar 19 09:18:04.062565 master-0 kubenswrapper[4035]: I0319 09:18:04.062224 4035 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6q4bt"] Mar 19 09:18:04.334292 master-0 kubenswrapper[4035]: I0319 09:18:04.334098 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:18:04.334292 master-0 kubenswrapper[4035]: E0319 09:18:04.334210 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4" Mar 19 09:18:04.334292 master-0 kubenswrapper[4035]: I0319 09:18:04.334098 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:04.334732 master-0 kubenswrapper[4035]: E0319 09:18:04.334342 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076291 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerStarted","Data":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076511 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-controller" containerID="cri-o://fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" gracePeriod=30 Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076629 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" gracePeriod=30 Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076716 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-node" containerID="cri-o://bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" gracePeriod=30 Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076712 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="nbdb" containerID="cri-o://8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" gracePeriod=30 Mar 19 09:18:06.076977 master-0 
kubenswrapper[4035]: I0319 09:18:06.076790 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-acl-logging" containerID="cri-o://96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" gracePeriod=30 Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076708 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076882 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076826 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="northd" containerID="cri-o://dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" gracePeriod=30 Mar 19 09:18:06.076977 master-0 kubenswrapper[4035]: I0319 09:18:06.076950 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:18:06.079394 master-0 kubenswrapper[4035]: I0319 09:18:06.077054 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="sbdb" containerID="cri-o://9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" gracePeriod=30 Mar 19 09:18:06.081998 master-0 kubenswrapper[4035]: E0319 09:18:06.081772 4035 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 19 09:18:06.081998 master-0 kubenswrapper[4035]: E0319 09:18:06.081920 4035 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 19 09:18:06.084251 master-0 kubenswrapper[4035]: E0319 09:18:06.084143 4035 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 19 09:18:06.088431 master-0 kubenswrapper[4035]: E0319 09:18:06.087862 4035 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 19 09:18:06.090153 master-0 kubenswrapper[4035]: E0319 09:18:06.089888 4035 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 19 09:18:06.090153 master-0 kubenswrapper[4035]: E0319 09:18:06.089974 4035 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="sbdb" Mar 19 09:18:06.091740 master-0 kubenswrapper[4035]: E0319 09:18:06.091372 4035 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 19 09:18:06.091740 master-0 kubenswrapper[4035]: E0319 09:18:06.091490 4035 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="nbdb" Mar 19 09:18:06.107106 master-0 kubenswrapper[4035]: I0319 09:18:06.106987 4035 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovnkube-controller" containerID="cri-o://6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" gracePeriod=30 Mar 19 09:18:06.119515 master-0 kubenswrapper[4035]: I0319 09:18:06.118418 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" podStartSLOduration=8.009779081 podStartE2EDuration="28.118397896s" podCreationTimestamp="2026-03-19 
09:17:38 +0000 UTC" firstStartedPulling="2026-03-19 09:17:39.140146369 +0000 UTC m=+68.598761310" lastFinishedPulling="2026-03-19 09:17:59.248765174 +0000 UTC m=+88.707380125" observedRunningTime="2026-03-19 09:18:06.117908102 +0000 UTC m=+95.576523123" watchObservedRunningTime="2026-03-19 09:18:06.118397896 +0000 UTC m=+95.577012857" Mar 19 09:18:06.333700 master-0 kubenswrapper[4035]: I0319 09:18:06.333572 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:06.333875 master-0 kubenswrapper[4035]: I0319 09:18:06.333784 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:18:06.333980 master-0 kubenswrapper[4035]: E0319 09:18:06.333924 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:18:06.334154 master-0 kubenswrapper[4035]: E0319 09:18:06.334073 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4" Mar 19 09:18:06.746744 master-0 kubenswrapper[4035]: I0319 09:18:06.746673 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/ovnkube-controller/0.log" Mar 19 09:18:06.749201 master-0 kubenswrapper[4035]: I0319 09:18:06.749173 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/kube-rbac-proxy-ovn-metrics/0.log" Mar 19 09:18:06.750071 master-0 kubenswrapper[4035]: I0319 09:18:06.750045 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/kube-rbac-proxy-node/0.log" Mar 19 09:18:06.750959 master-0 kubenswrapper[4035]: I0319 09:18:06.750926 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/ovn-acl-logging/0.log" Mar 19 09:18:06.751772 master-0 kubenswrapper[4035]: I0319 09:18:06.751748 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/ovn-controller/0.log" Mar 19 09:18:06.752477 master-0 kubenswrapper[4035]: I0319 09:18:06.752454 4035 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:18:06.811577 master-0 kubenswrapper[4035]: I0319 09:18:06.811477 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-node-log\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811577 master-0 kubenswrapper[4035]: I0319 09:18:06.811574 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-netns\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811622 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-systemd\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811653 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-ovn\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811692 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-etc-openvswitch\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811724 4035 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-netd\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811677 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-node-log" (OuterVolumeSpecName: "node-log") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811727 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811754 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-kubelet\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811830 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811856 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5vw8\" (UniqueName: \"kubernetes.io/projected/4da90d03-46bd-41be-9224-0c63b31c535c-kube-api-access-h5vw8\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811861 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811929 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.811938 master-0 kubenswrapper[4035]: I0319 09:18:06.811894 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812013 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-var-lib-openvswitch\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812061 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-openvswitch\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812113 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-env-overrides\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812161 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4da90d03-46bd-41be-9224-0c63b31c535c-ovn-node-metrics-cert\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 
kubenswrapper[4035]: I0319 09:18:06.812211 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-systemd-units\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812255 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-config\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812296 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-slash\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812337 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-bin\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812386 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-script-lib\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812427 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-log-socket\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812466 4035 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-ovn-kubernetes\") pod \"4da90d03-46bd-41be-9224-0c63b31c535c\" (UID: \"4da90d03-46bd-41be-9224-0c63b31c535c\") " Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.811887 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812656 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812715 4035 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812748 4035 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812774 4035 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.812789 master-0 kubenswrapper[4035]: I0319 09:18:06.812776 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.812809 4035 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-node-log\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.812858 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.812888 4035 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.812941 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.813013 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.813063 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-slash" (OuterVolumeSpecName: "host-slash") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.813470 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.813524 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.813637 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-log-socket" (OuterVolumeSpecName: "log-socket") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.813643 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.814485 master-0 kubenswrapper[4035]: I0319 09:18:06.814164 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.823909 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da90d03-46bd-41be-9224-0c63b31c535c-kube-api-access-h5vw8" (OuterVolumeSpecName: "kube-api-access-h5vw8") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "kube-api-access-h5vw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.824819 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4da90d03-46bd-41be-9224-0c63b31c535c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.826740 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zmrpw"] Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.826878 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-ovn-metrics" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.826901 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-ovn-metrics" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.826917 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="northd" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.826930 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="northd" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.826945 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-acl-logging" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.826959 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-acl-logging" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.826973 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="nbdb" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.826986 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="nbdb" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.827001 4035 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovnkube-controller" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827015 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovnkube-controller" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.827030 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-controller" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827042 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-controller" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.827056 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-node" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827068 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-node" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.827084 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kubecfg-setup" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827097 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kubecfg-setup" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: E0319 09:18:06.827111 4035 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="sbdb" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827125 4035 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="sbdb" Mar 19 
09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827197 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-controller" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827218 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="nbdb" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827237 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-node" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827251 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovn-acl-logging" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827290 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="northd" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827306 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="kube-rbac-proxy-ovn-metrics" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827320 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="sbdb" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827332 4035 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" containerName="ovnkube-controller" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.827860 4035 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod 
"4da90d03-46bd-41be-9224-0c63b31c535c" (UID: "4da90d03-46bd-41be-9224-0c63b31c535c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:06.837320 master-0 kubenswrapper[4035]: I0319 09:18:06.828468 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913224 master-0 kubenswrapper[4035]: I0319 09:18:06.913096 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913224 master-0 kubenswrapper[4035]: I0319 09:18:06.913151 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913224 master-0 kubenswrapper[4035]: I0319 09:18:06.913179 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913224 master-0 kubenswrapper[4035]: I0319 09:18:06.913209 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfnn\" (UniqueName: \"kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913262 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913282 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913300 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913321 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913342 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" 
(UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913362 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913384 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913405 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913439 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913458 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913479 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913497 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913514 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913533 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 
09:18:06.913585 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.913623 master-0 kubenswrapper[4035]: I0319 09:18:06.913603 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913853 4035 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913890 4035 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913905 4035 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913917 4035 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913931 4035 reconciler_common.go:293] 
"Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913950 4035 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913961 4035 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913976 4035 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913987 4035 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.913998 4035 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.914010 4035 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5vw8\" (UniqueName: \"kubernetes.io/projected/4da90d03-46bd-41be-9224-0c63b31c535c-kube-api-access-h5vw8\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.914022 4035 
reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.914033 4035 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4da90d03-46bd-41be-9224-0c63b31c535c-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.914044 4035 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4da90d03-46bd-41be-9224-0c63b31c535c-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:06.914421 master-0 kubenswrapper[4035]: I0319 09:18:06.914055 4035 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4da90d03-46bd-41be-9224-0c63b31c535c-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:07.015582 master-0 kubenswrapper[4035]: I0319 09:18:07.015403 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.015582 master-0 kubenswrapper[4035]: I0319 09:18:07.015451 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.015582 master-0 kubenswrapper[4035]: I0319 09:18:07.015481 4035 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.015582 master-0 kubenswrapper[4035]: I0319 09:18:07.015500 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.015582 master-0 kubenswrapper[4035]: I0319 09:18:07.015520 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfnn\" (UniqueName: \"kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.015582 master-0 kubenswrapper[4035]: I0319 09:18:07.015559 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.015582 master-0 kubenswrapper[4035]: I0319 09:18:07.015582 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015688 4035 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015792 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015868 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015901 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015935 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015959 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015801 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.015980 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.016020 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.016018 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.016070 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.016077 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.016186 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.016319 master-0 kubenswrapper[4035]: I0319 09:18:07.016326 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016379 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016387 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016445 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016483 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016577 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016605 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016581 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016628 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016654 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016676 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016658 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016718 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016756 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016764 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.016788 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.017373 master-0 kubenswrapper[4035]: I0319 09:18:07.017332 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.018522 master-0 kubenswrapper[4035]: I0319 09:18:07.018026 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.021816 master-0 kubenswrapper[4035]: I0319 09:18:07.021748 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.044639 master-0 kubenswrapper[4035]: I0319 09:18:07.044521 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfnn\" (UniqueName: \"kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.083144 master-0 kubenswrapper[4035]: I0319 09:18:07.083077 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/ovnkube-controller/0.log" Mar 19 09:18:07.085972 master-0 kubenswrapper[4035]: I0319 09:18:07.085926 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/kube-rbac-proxy-ovn-metrics/0.log" Mar 19 09:18:07.086808 master-0 kubenswrapper[4035]: I0319 09:18:07.086672 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/kube-rbac-proxy-node/0.log" Mar 19 09:18:07.087612 master-0 kubenswrapper[4035]: I0319 09:18:07.087523 4035 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/ovn-acl-logging/0.log" Mar 19 09:18:07.088428 master-0 kubenswrapper[4035]: I0319 09:18:07.088380 4035 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6q4bt_4da90d03-46bd-41be-9224-0c63b31c535c/ovn-controller/0.log" Mar 19 09:18:07.088975 master-0 kubenswrapper[4035]: I0319 09:18:07.088870 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" exitCode=1 Mar 19 09:18:07.088975 master-0 kubenswrapper[4035]: I0319 09:18:07.088912 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" exitCode=0 Mar 19 09:18:07.088975 master-0 kubenswrapper[4035]: I0319 09:18:07.088922 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" exitCode=0 Mar 19 09:18:07.088975 master-0 kubenswrapper[4035]: I0319 09:18:07.088945 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" exitCode=0 Mar 19 09:18:07.088975 master-0 kubenswrapper[4035]: I0319 09:18:07.088960 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" exitCode=143 Mar 19 09:18:07.088975 master-0 kubenswrapper[4035]: I0319 09:18:07.088969 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" exitCode=143 Mar 19 
09:18:07.088975 master-0 kubenswrapper[4035]: I0319 09:18:07.088986 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" exitCode=143 Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.088998 4035 generic.go:334] "Generic (PLEG): container finished" podID="4da90d03-46bd-41be-9224-0c63b31c535c" containerID="fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" exitCode=143 Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.088975 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089022 4035 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089049 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089075 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089091 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" 
event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089106 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089117 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089239 4035 scope.go:117] "RemoveContainer" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089130 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089437 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} Mar 19 09:18:07.089427 master-0 kubenswrapper[4035]: I0319 09:18:07.089449 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089463 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" 
event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089477 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089486 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089494 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089501 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089508 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089515 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089521 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} Mar 19 09:18:07.090098 
master-0 kubenswrapper[4035]: I0319 09:18:07.089528 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089534 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089567 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089578 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089586 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089593 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089600 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089607 4035 pod_container_deletor.go:114] "Failed to issue 
the request to remove container" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089613 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089621 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089628 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089635 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089644 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6q4bt" event={"ID":"4da90d03-46bd-41be-9224-0c63b31c535c","Type":"ContainerDied","Data":"8ee54fa7ada4c624d77b0e2f3dcbdb8c8d02973745fe65c2d05740e42c92ec9e"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089654 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089663 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089670 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089676 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089683 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089690 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} Mar 19 09:18:07.090098 master-0 kubenswrapper[4035]: I0319 09:18:07.089697 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} Mar 19 09:18:07.091733 master-0 kubenswrapper[4035]: I0319 09:18:07.089704 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} Mar 19 09:18:07.091733 master-0 kubenswrapper[4035]: I0319 09:18:07.089710 4035 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} Mar 19 09:18:07.109671 master-0 kubenswrapper[4035]: I0319 
09:18:07.109613 4035 scope.go:117] "RemoveContainer" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" Mar 19 09:18:07.122014 master-0 kubenswrapper[4035]: I0319 09:18:07.121960 4035 scope.go:117] "RemoveContainer" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" Mar 19 09:18:07.133038 master-0 kubenswrapper[4035]: I0319 09:18:07.132951 4035 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6q4bt"] Mar 19 09:18:07.139427 master-0 kubenswrapper[4035]: I0319 09:18:07.139346 4035 scope.go:117] "RemoveContainer" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" Mar 19 09:18:07.139946 master-0 kubenswrapper[4035]: I0319 09:18:07.139372 4035 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6q4bt"] Mar 19 09:18:07.145778 master-0 kubenswrapper[4035]: I0319 09:18:07.145688 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:07.157774 master-0 kubenswrapper[4035]: I0319 09:18:07.157730 4035 scope.go:117] "RemoveContainer" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" Mar 19 09:18:07.166978 master-0 kubenswrapper[4035]: W0319 09:18:07.166875 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdcc18f9_66cf_45d9_965d_d0a57fcf285c.slice/crio-71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334 WatchSource:0}: Error finding container 71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334: Status 404 returned error can't find the container with id 71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334 Mar 19 09:18:07.177611 master-0 kubenswrapper[4035]: I0319 09:18:07.177577 4035 scope.go:117] "RemoveContainer" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" Mar 19 09:18:07.190330 master-0 kubenswrapper[4035]: I0319 09:18:07.190290 4035 scope.go:117] "RemoveContainer" containerID="96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" Mar 19 09:18:07.217198 master-0 kubenswrapper[4035]: I0319 09:18:07.217164 4035 scope.go:117] "RemoveContainer" containerID="fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" Mar 19 09:18:07.240712 master-0 kubenswrapper[4035]: I0319 09:18:07.240677 4035 scope.go:117] "RemoveContainer" containerID="51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b" Mar 19 09:18:07.254165 master-0 kubenswrapper[4035]: I0319 09:18:07.254098 4035 scope.go:117] "RemoveContainer" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" Mar 19 09:18:07.254737 master-0 kubenswrapper[4035]: E0319 09:18:07.254693 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": container with ID starting with 6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c not found: ID does not exist" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" Mar 19 09:18:07.254798 master-0 kubenswrapper[4035]: I0319 09:18:07.254740 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} err="failed to get container status \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": rpc error: code = NotFound desc = could not find container \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": container with ID starting with 6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c not found: ID does not exist" Mar 19 09:18:07.254798 master-0 kubenswrapper[4035]: I0319 09:18:07.254777 4035 scope.go:117] "RemoveContainer" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" Mar 19 09:18:07.255132 master-0 kubenswrapper[4035]: E0319 09:18:07.255103 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": container with ID starting with 9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7 not found: ID does not exist" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" Mar 19 09:18:07.255205 master-0 kubenswrapper[4035]: I0319 09:18:07.255133 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} err="failed to get container status \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": rpc error: code = NotFound desc = could not find container 
\"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": container with ID starting with 9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7 not found: ID does not exist" Mar 19 09:18:07.255205 master-0 kubenswrapper[4035]: I0319 09:18:07.255150 4035 scope.go:117] "RemoveContainer" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" Mar 19 09:18:07.256064 master-0 kubenswrapper[4035]: E0319 09:18:07.255609 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": container with ID starting with 8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf not found: ID does not exist" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" Mar 19 09:18:07.256064 master-0 kubenswrapper[4035]: I0319 09:18:07.255718 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} err="failed to get container status \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": rpc error: code = NotFound desc = could not find container \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": container with ID starting with 8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf not found: ID does not exist" Mar 19 09:18:07.256064 master-0 kubenswrapper[4035]: I0319 09:18:07.255762 4035 scope.go:117] "RemoveContainer" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" Mar 19 09:18:07.256415 master-0 kubenswrapper[4035]: E0319 09:18:07.256236 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": container with ID starting with 
dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c not found: ID does not exist" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" Mar 19 09:18:07.256415 master-0 kubenswrapper[4035]: I0319 09:18:07.256286 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} err="failed to get container status \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": rpc error: code = NotFound desc = could not find container \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": container with ID starting with dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c not found: ID does not exist" Mar 19 09:18:07.256415 master-0 kubenswrapper[4035]: I0319 09:18:07.256323 4035 scope.go:117] "RemoveContainer" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" Mar 19 09:18:07.256732 master-0 kubenswrapper[4035]: E0319 09:18:07.256703 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": container with ID starting with 33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f not found: ID does not exist" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" Mar 19 09:18:07.256798 master-0 kubenswrapper[4035]: I0319 09:18:07.256732 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} err="failed to get container status \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": rpc error: code = NotFound desc = could not find container \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": container with ID starting with 
33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f not found: ID does not exist" Mar 19 09:18:07.256798 master-0 kubenswrapper[4035]: I0319 09:18:07.256749 4035 scope.go:117] "RemoveContainer" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" Mar 19 09:18:07.257051 master-0 kubenswrapper[4035]: E0319 09:18:07.257027 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": container with ID starting with bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77 not found: ID does not exist" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" Mar 19 09:18:07.257100 master-0 kubenswrapper[4035]: I0319 09:18:07.257047 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} err="failed to get container status \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": rpc error: code = NotFound desc = could not find container \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": container with ID starting with bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77 not found: ID does not exist" Mar 19 09:18:07.257100 master-0 kubenswrapper[4035]: I0319 09:18:07.257063 4035 scope.go:117] "RemoveContainer" containerID="96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" Mar 19 09:18:07.257427 master-0 kubenswrapper[4035]: E0319 09:18:07.257383 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": container with ID starting with 96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598 not found: ID does not exist" 
containerID="96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" Mar 19 09:18:07.257483 master-0 kubenswrapper[4035]: I0319 09:18:07.257424 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} err="failed to get container status \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": rpc error: code = NotFound desc = could not find container \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": container with ID starting with 96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598 not found: ID does not exist" Mar 19 09:18:07.257483 master-0 kubenswrapper[4035]: I0319 09:18:07.257449 4035 scope.go:117] "RemoveContainer" containerID="fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" Mar 19 09:18:07.257855 master-0 kubenswrapper[4035]: E0319 09:18:07.257823 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": container with ID starting with fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175 not found: ID does not exist" containerID="fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" Mar 19 09:18:07.257910 master-0 kubenswrapper[4035]: I0319 09:18:07.257850 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} err="failed to get container status \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": rpc error: code = NotFound desc = could not find container \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": container with ID starting with fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175 not found: ID does not exist" Mar 19 09:18:07.257910 master-0 
kubenswrapper[4035]: I0319 09:18:07.257870 4035 scope.go:117] "RemoveContainer" containerID="51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b" Mar 19 09:18:07.258272 master-0 kubenswrapper[4035]: E0319 09:18:07.258230 4035 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": container with ID starting with 51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b not found: ID does not exist" containerID="51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b" Mar 19 09:18:07.258313 master-0 kubenswrapper[4035]: I0319 09:18:07.258271 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} err="failed to get container status \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": rpc error: code = NotFound desc = could not find container \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": container with ID starting with 51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b not found: ID does not exist" Mar 19 09:18:07.258313 master-0 kubenswrapper[4035]: I0319 09:18:07.258296 4035 scope.go:117] "RemoveContainer" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" Mar 19 09:18:07.258676 master-0 kubenswrapper[4035]: I0319 09:18:07.258627 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} err="failed to get container status \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": rpc error: code = NotFound desc = could not find container \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": container with ID starting with 
6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c not found: ID does not exist" Mar 19 09:18:07.258725 master-0 kubenswrapper[4035]: I0319 09:18:07.258673 4035 scope.go:117] "RemoveContainer" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" Mar 19 09:18:07.258999 master-0 kubenswrapper[4035]: I0319 09:18:07.258962 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} err="failed to get container status \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": rpc error: code = NotFound desc = could not find container \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": container with ID starting with 9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7 not found: ID does not exist" Mar 19 09:18:07.258999 master-0 kubenswrapper[4035]: I0319 09:18:07.258986 4035 scope.go:117] "RemoveContainer" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" Mar 19 09:18:07.259289 master-0 kubenswrapper[4035]: I0319 09:18:07.259245 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} err="failed to get container status \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": rpc error: code = NotFound desc = could not find container \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": container with ID starting with 8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf not found: ID does not exist" Mar 19 09:18:07.259289 master-0 kubenswrapper[4035]: I0319 09:18:07.259278 4035 scope.go:117] "RemoveContainer" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" Mar 19 09:18:07.259622 master-0 kubenswrapper[4035]: I0319 09:18:07.259576 4035 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} err="failed to get container status \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": rpc error: code = NotFound desc = could not find container \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": container with ID starting with dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c not found: ID does not exist" Mar 19 09:18:07.259622 master-0 kubenswrapper[4035]: I0319 09:18:07.259612 4035 scope.go:117] "RemoveContainer" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" Mar 19 09:18:07.259947 master-0 kubenswrapper[4035]: I0319 09:18:07.259909 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} err="failed to get container status \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": rpc error: code = NotFound desc = could not find container \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": container with ID starting with 33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f not found: ID does not exist" Mar 19 09:18:07.259947 master-0 kubenswrapper[4035]: I0319 09:18:07.259932 4035 scope.go:117] "RemoveContainer" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" Mar 19 09:18:07.260272 master-0 kubenswrapper[4035]: I0319 09:18:07.260215 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} err="failed to get container status \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": rpc error: code = NotFound desc = could not find container \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": container with ID starting with 
bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77 not found: ID does not exist" Mar 19 09:18:07.260272 master-0 kubenswrapper[4035]: I0319 09:18:07.260263 4035 scope.go:117] "RemoveContainer" containerID="96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" Mar 19 09:18:07.261122 master-0 kubenswrapper[4035]: I0319 09:18:07.260671 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} err="failed to get container status \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": rpc error: code = NotFound desc = could not find container \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": container with ID starting with 96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598 not found: ID does not exist" Mar 19 09:18:07.261122 master-0 kubenswrapper[4035]: I0319 09:18:07.260709 4035 scope.go:117] "RemoveContainer" containerID="fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" Mar 19 09:18:07.261122 master-0 kubenswrapper[4035]: I0319 09:18:07.261045 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} err="failed to get container status \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": rpc error: code = NotFound desc = could not find container \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": container with ID starting with fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175 not found: ID does not exist" Mar 19 09:18:07.261122 master-0 kubenswrapper[4035]: I0319 09:18:07.261062 4035 scope.go:117] "RemoveContainer" containerID="51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b" Mar 19 09:18:07.261519 master-0 kubenswrapper[4035]: I0319 09:18:07.261313 4035 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} err="failed to get container status \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": rpc error: code = NotFound desc = could not find container \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": container with ID starting with 51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b not found: ID does not exist" Mar 19 09:18:07.261519 master-0 kubenswrapper[4035]: I0319 09:18:07.261344 4035 scope.go:117] "RemoveContainer" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" Mar 19 09:18:07.261788 master-0 kubenswrapper[4035]: I0319 09:18:07.261636 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} err="failed to get container status \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": rpc error: code = NotFound desc = could not find container \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": container with ID starting with 6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c not found: ID does not exist" Mar 19 09:18:07.261788 master-0 kubenswrapper[4035]: I0319 09:18:07.261664 4035 scope.go:117] "RemoveContainer" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" Mar 19 09:18:07.262005 master-0 kubenswrapper[4035]: I0319 09:18:07.261918 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} err="failed to get container status \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": rpc error: code = NotFound desc = could not find container \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": container with ID starting with 
9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7 not found: ID does not exist" Mar 19 09:18:07.262005 master-0 kubenswrapper[4035]: I0319 09:18:07.261953 4035 scope.go:117] "RemoveContainer" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" Mar 19 09:18:07.262272 master-0 kubenswrapper[4035]: I0319 09:18:07.262199 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} err="failed to get container status \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": rpc error: code = NotFound desc = could not find container \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": container with ID starting with 8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf not found: ID does not exist" Mar 19 09:18:07.262272 master-0 kubenswrapper[4035]: I0319 09:18:07.262222 4035 scope.go:117] "RemoveContainer" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" Mar 19 09:18:07.262528 master-0 kubenswrapper[4035]: I0319 09:18:07.262447 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} err="failed to get container status \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": rpc error: code = NotFound desc = could not find container \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": container with ID starting with dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c not found: ID does not exist" Mar 19 09:18:07.262528 master-0 kubenswrapper[4035]: I0319 09:18:07.262479 4035 scope.go:117] "RemoveContainer" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" Mar 19 09:18:07.262987 master-0 kubenswrapper[4035]: I0319 09:18:07.262796 4035 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} err="failed to get container status \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": rpc error: code = NotFound desc = could not find container \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": container with ID starting with 33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f not found: ID does not exist" Mar 19 09:18:07.262987 master-0 kubenswrapper[4035]: I0319 09:18:07.262849 4035 scope.go:117] "RemoveContainer" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" Mar 19 09:18:07.263212 master-0 kubenswrapper[4035]: I0319 09:18:07.263147 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} err="failed to get container status \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": rpc error: code = NotFound desc = could not find container \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": container with ID starting with bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77 not found: ID does not exist" Mar 19 09:18:07.263212 master-0 kubenswrapper[4035]: I0319 09:18:07.263178 4035 scope.go:117] "RemoveContainer" containerID="96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" Mar 19 09:18:07.263641 master-0 kubenswrapper[4035]: I0319 09:18:07.263407 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} err="failed to get container status \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": rpc error: code = NotFound desc = could not find container \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": container with ID starting with 
96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598 not found: ID does not exist" Mar 19 09:18:07.263641 master-0 kubenswrapper[4035]: I0319 09:18:07.263461 4035 scope.go:117] "RemoveContainer" containerID="fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" Mar 19 09:18:07.264193 master-0 kubenswrapper[4035]: I0319 09:18:07.264022 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} err="failed to get container status \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": rpc error: code = NotFound desc = could not find container \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": container with ID starting with fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175 not found: ID does not exist" Mar 19 09:18:07.264193 master-0 kubenswrapper[4035]: I0319 09:18:07.264072 4035 scope.go:117] "RemoveContainer" containerID="51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b" Mar 19 09:18:07.264623 master-0 kubenswrapper[4035]: I0319 09:18:07.264567 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} err="failed to get container status \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": rpc error: code = NotFound desc = could not find container \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": container with ID starting with 51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b not found: ID does not exist" Mar 19 09:18:07.264623 master-0 kubenswrapper[4035]: I0319 09:18:07.264603 4035 scope.go:117] "RemoveContainer" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" Mar 19 09:18:07.265212 master-0 kubenswrapper[4035]: I0319 09:18:07.265129 4035 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} err="failed to get container status \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": rpc error: code = NotFound desc = could not find container \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": container with ID starting with 6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c not found: ID does not exist" Mar 19 09:18:07.265212 master-0 kubenswrapper[4035]: I0319 09:18:07.265196 4035 scope.go:117] "RemoveContainer" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" Mar 19 09:18:07.265945 master-0 kubenswrapper[4035]: I0319 09:18:07.265514 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} err="failed to get container status \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": rpc error: code = NotFound desc = could not find container \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": container with ID starting with 9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7 not found: ID does not exist" Mar 19 09:18:07.265945 master-0 kubenswrapper[4035]: I0319 09:18:07.265535 4035 scope.go:117] "RemoveContainer" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" Mar 19 09:18:07.265945 master-0 kubenswrapper[4035]: I0319 09:18:07.265820 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} err="failed to get container status \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": rpc error: code = NotFound desc = could not find container \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": container with ID starting with 
8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf not found: ID does not exist" Mar 19 09:18:07.265945 master-0 kubenswrapper[4035]: I0319 09:18:07.265851 4035 scope.go:117] "RemoveContainer" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" Mar 19 09:18:07.266272 master-0 kubenswrapper[4035]: I0319 09:18:07.266183 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} err="failed to get container status \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": rpc error: code = NotFound desc = could not find container \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": container with ID starting with dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c not found: ID does not exist" Mar 19 09:18:07.266272 master-0 kubenswrapper[4035]: I0319 09:18:07.266248 4035 scope.go:117] "RemoveContainer" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" Mar 19 09:18:07.266852 master-0 kubenswrapper[4035]: I0319 09:18:07.266807 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} err="failed to get container status \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": rpc error: code = NotFound desc = could not find container \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": container with ID starting with 33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f not found: ID does not exist" Mar 19 09:18:07.266852 master-0 kubenswrapper[4035]: I0319 09:18:07.266839 4035 scope.go:117] "RemoveContainer" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" Mar 19 09:18:07.267563 master-0 kubenswrapper[4035]: I0319 09:18:07.267296 4035 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} err="failed to get container status \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": rpc error: code = NotFound desc = could not find container \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": container with ID starting with bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77 not found: ID does not exist" Mar 19 09:18:07.267563 master-0 kubenswrapper[4035]: I0319 09:18:07.267321 4035 scope.go:117] "RemoveContainer" containerID="96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598" Mar 19 09:18:07.267745 master-0 kubenswrapper[4035]: I0319 09:18:07.267715 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598"} err="failed to get container status \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": rpc error: code = NotFound desc = could not find container \"96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598\": container with ID starting with 96240f58e687c312695d43cc417af3a8d18beb988362aef5c8f93934fbeb2598 not found: ID does not exist" Mar 19 09:18:07.267885 master-0 kubenswrapper[4035]: I0319 09:18:07.267747 4035 scope.go:117] "RemoveContainer" containerID="fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175" Mar 19 09:18:07.268898 master-0 kubenswrapper[4035]: I0319 09:18:07.268831 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175"} err="failed to get container status \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": rpc error: code = NotFound desc = could not find container \"fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175\": container with ID starting with 
fd4edb5edce1ebff51f62dd05c61c00eac3e7ce08f3b37960c76e4ec3a363175 not found: ID does not exist" Mar 19 09:18:07.268898 master-0 kubenswrapper[4035]: I0319 09:18:07.268867 4035 scope.go:117] "RemoveContainer" containerID="51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b" Mar 19 09:18:07.269667 master-0 kubenswrapper[4035]: I0319 09:18:07.269266 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b"} err="failed to get container status \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": rpc error: code = NotFound desc = could not find container \"51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b\": container with ID starting with 51c1edeacefe08cad868b731d5a8090f6177cba93ccbea4bf87327aaa2ce6d4b not found: ID does not exist" Mar 19 09:18:07.269667 master-0 kubenswrapper[4035]: I0319 09:18:07.269298 4035 scope.go:117] "RemoveContainer" containerID="6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c" Mar 19 09:18:07.269941 master-0 kubenswrapper[4035]: I0319 09:18:07.269886 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c"} err="failed to get container status \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": rpc error: code = NotFound desc = could not find container \"6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c\": container with ID starting with 6075266ff8a43a169c6af06a207890e78ee7822625b068cef9a8143cda5fa68c not found: ID does not exist" Mar 19 09:18:07.269941 master-0 kubenswrapper[4035]: I0319 09:18:07.269935 4035 scope.go:117] "RemoveContainer" containerID="9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7" Mar 19 09:18:07.270437 master-0 kubenswrapper[4035]: I0319 09:18:07.270280 4035 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7"} err="failed to get container status \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": rpc error: code = NotFound desc = could not find container \"9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7\": container with ID starting with 9e779fe99e595ea8b9f5929382228367486688cedb8561de7def0cd1243204f7 not found: ID does not exist" Mar 19 09:18:07.270437 master-0 kubenswrapper[4035]: I0319 09:18:07.270298 4035 scope.go:117] "RemoveContainer" containerID="8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf" Mar 19 09:18:07.270722 master-0 kubenswrapper[4035]: I0319 09:18:07.270678 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf"} err="failed to get container status \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": rpc error: code = NotFound desc = could not find container \"8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf\": container with ID starting with 8d8ec843157aac2c5d1aaa1740d01764149852bed048e524472209256068dcbf not found: ID does not exist" Mar 19 09:18:07.270779 master-0 kubenswrapper[4035]: I0319 09:18:07.270712 4035 scope.go:117] "RemoveContainer" containerID="dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c" Mar 19 09:18:07.271569 master-0 kubenswrapper[4035]: I0319 09:18:07.271516 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c"} err="failed to get container status \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": rpc error: code = NotFound desc = could not find container \"dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c\": container with ID starting with 
dcc288a7cda0f9fc417d22b0d5bda1dc177c23957bcb368db2b4a831caceae7c not found: ID does not exist" Mar 19 09:18:07.271569 master-0 kubenswrapper[4035]: I0319 09:18:07.271551 4035 scope.go:117] "RemoveContainer" containerID="33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f" Mar 19 09:18:07.271938 master-0 kubenswrapper[4035]: I0319 09:18:07.271864 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f"} err="failed to get container status \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": rpc error: code = NotFound desc = could not find container \"33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f\": container with ID starting with 33b081d81cadb79bcf2d4ae9283d7084b6588bf4d9c95ef01cacc7b4ca11478f not found: ID does not exist" Mar 19 09:18:07.271938 master-0 kubenswrapper[4035]: I0319 09:18:07.271922 4035 scope.go:117] "RemoveContainer" containerID="bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77" Mar 19 09:18:07.272481 master-0 kubenswrapper[4035]: I0319 09:18:07.272442 4035 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77"} err="failed to get container status \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": rpc error: code = NotFound desc = could not find container \"bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77\": container with ID starting with bdcdbd00696ceaf36bc0c290a39113b0f24c5d33116f2ff01b91a21f86db6a77 not found: ID does not exist" Mar 19 09:18:07.340276 master-0 kubenswrapper[4035]: I0319 09:18:07.340192 4035 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da90d03-46bd-41be-9224-0c63b31c535c" path="/var/lib/kubelet/pods/4da90d03-46bd-41be-9224-0c63b31c535c/volumes" Mar 19 09:18:08.094416 master-0 
kubenswrapper[4035]: I0319 09:18:08.094320 4035 generic.go:334] "Generic (PLEG): container finished" podID="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" containerID="cfed02ef0a3bee4084b5a5748407cbaeafff5b6fc759f0c7f9bdc76ec5af9ce1" exitCode=0
Mar 19 09:18:08.095180 master-0 kubenswrapper[4035]: I0319 09:18:08.094446 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerDied","Data":"cfed02ef0a3bee4084b5a5748407cbaeafff5b6fc759f0c7f9bdc76ec5af9ce1"}
Mar 19 09:18:08.095180 master-0 kubenswrapper[4035]: I0319 09:18:08.094499 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334"}
Mar 19 09:18:08.333695 master-0 kubenswrapper[4035]: I0319 09:18:08.333637 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:08.333822 master-0 kubenswrapper[4035]: E0319 09:18:08.333771 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:18:08.333822 master-0 kubenswrapper[4035]: I0319 09:18:08.333639 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:08.334059 master-0 kubenswrapper[4035]: E0319 09:18:08.333877 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:18:09.103613 master-0 kubenswrapper[4035]: I0319 09:18:09.103528 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"d8423080d68496e9cb668aa433974025b2a07b5c47773ccd46a69f2d515dbad5"}
Mar 19 09:18:09.103613 master-0 kubenswrapper[4035]: I0319 09:18:09.103592 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"a468d40e36a65d4dcd86678d78d1d9fe95b7c5f1405561860af00a8ed931389c"}
Mar 19 09:18:09.103613 master-0 kubenswrapper[4035]: I0319 09:18:09.103607 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"59ffb1a17584d67b0954828f67e4e9c7b721a736919cf22f09f944366c2436dc"}
Mar 19 09:18:09.103613 master-0 kubenswrapper[4035]: I0319 09:18:09.103619 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"01f485b436cdcaa5b83cbc88ef747690ceec933d08d612f2ac01c05ec271f825"}
Mar 19 09:18:09.104940 master-0 kubenswrapper[4035]: I0319 09:18:09.103632 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"c4307e0fd31fe83e16dc00625b1f40ab42533b08aa83e0de4b50db5b0c88917c"}
Mar 19 09:18:09.104940 master-0 kubenswrapper[4035]: I0319 09:18:09.103644 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"e6b28ee47fd02ca5015643f1e9f6a7dd5d2208f19bca3a330d2dbaf557b8f38f"}
Mar 19 09:18:10.334485 master-0 kubenswrapper[4035]: I0319 09:18:10.334416 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:10.334485 master-0 kubenswrapper[4035]: I0319 09:18:10.334462 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:10.335314 master-0 kubenswrapper[4035]: E0319 09:18:10.334626 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:18:10.335314 master-0 kubenswrapper[4035]: E0319 09:18:10.334714 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:18:11.113927 master-0 kubenswrapper[4035]: I0319 09:18:11.113867 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"078511e7f3fd80efe8394250f592a002365f4164e90b4baa45792667b13c2bd7"}
Mar 19 09:18:11.251021 master-0 kubenswrapper[4035]: I0319 09:18:11.250966 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:11.251191 master-0 kubenswrapper[4035]: E0319 09:18:11.251132 4035 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:11.251267 master-0 kubenswrapper[4035]: E0319 09:18:11.251220 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:15.251198037 +0000 UTC m=+164.709812988 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:12.334287 master-0 kubenswrapper[4035]: I0319 09:18:12.334200 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:12.335003 master-0 kubenswrapper[4035]: E0319 09:18:12.334319 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:18:12.335003 master-0 kubenswrapper[4035]: I0319 09:18:12.334200 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:12.335003 master-0 kubenswrapper[4035]: E0319 09:18:12.334389 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:18:13.124535 master-0 kubenswrapper[4035]: I0319 09:18:13.123213 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" event={"ID":"cdcc18f9-66cf-45d9-965d-d0a57fcf285c","Type":"ContainerStarted","Data":"cd217aa1e228788af57c8e0eb6f1254ab97983e434cf32b7e1142dd8b819f092"}
Mar 19 09:18:13.124535 master-0 kubenswrapper[4035]: I0319 09:18:13.123602 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:13.124535 master-0 kubenswrapper[4035]: I0319 09:18:13.123622 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:13.124535 master-0 kubenswrapper[4035]: I0319 09:18:13.123632 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:13.144003 master-0 kubenswrapper[4035]: I0319 09:18:13.143387 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:13.149531 master-0 kubenswrapper[4035]: I0319 09:18:13.148814 4035 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:13.152011 master-0 kubenswrapper[4035]: I0319 09:18:13.151100 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" podStartSLOduration=7.151083234 podStartE2EDuration="7.151083234s" podCreationTimestamp="2026-03-19 09:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:13.150481406 +0000 UTC m=+102.609096347" watchObservedRunningTime="2026-03-19 09:18:13.151083234 +0000 UTC m=+102.609698175"
Mar 19 09:18:13.267642 master-0 kubenswrapper[4035]: I0319 09:18:13.267453 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:13.267845 master-0 kubenswrapper[4035]: E0319 09:18:13.267649 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 19 09:18:13.267845 master-0 kubenswrapper[4035]: E0319 09:18:13.267671 4035 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 19 09:18:13.267845 master-0 kubenswrapper[4035]: E0319 09:18:13.267684 4035 projected.go:194] Error preparing data for projected volume kube-api-access-k6t9w for pod openshift-network-diagnostics/network-check-target-lql9l: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:18:13.267845 master-0 kubenswrapper[4035]: E0319 09:18:13.267735 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w podName:6cc45721-c05b-4161-91d9-d65cf6ec61d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:45.267718049 +0000 UTC m=+134.726332990 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-k6t9w" (UniqueName: "kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w") pod "network-check-target-lql9l" (UID: "6cc45721-c05b-4161-91d9-d65cf6ec61d4") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 19 09:18:14.102657 master-0 kubenswrapper[4035]: I0319 09:18:14.100674 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lflg7"]
Mar 19 09:18:14.102657 master-0 kubenswrapper[4035]: I0319 09:18:14.101036 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:14.102657 master-0 kubenswrapper[4035]: E0319 09:18:14.101109 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:18:14.102657 master-0 kubenswrapper[4035]: I0319 09:18:14.102002 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-lql9l"]
Mar 19 09:18:14.102657 master-0 kubenswrapper[4035]: I0319 09:18:14.102148 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:14.102657 master-0 kubenswrapper[4035]: E0319 09:18:14.102209 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:18:16.333477 master-0 kubenswrapper[4035]: I0319 09:18:16.333425 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:16.333953 master-0 kubenswrapper[4035]: I0319 09:18:16.333441 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:16.333953 master-0 kubenswrapper[4035]: E0319 09:18:16.333584 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-lql9l" podUID="6cc45721-c05b-4161-91d9-d65cf6ec61d4"
Mar 19 09:18:16.333953 master-0 kubenswrapper[4035]: E0319 09:18:16.333655 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212"
Mar 19 09:18:17.863195 master-0 kubenswrapper[4035]: I0319 09:18:17.863127 4035 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Mar 19 09:18:17.863969 master-0 kubenswrapper[4035]: I0319 09:18:17.863313 4035 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Mar 19 09:18:17.900841 master-0 kubenswrapper[4035]: I0319 09:18:17.900794 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"]
Mar 19 09:18:17.901869 master-0 kubenswrapper[4035]: I0319 09:18:17.901365 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"
Mar 19 09:18:17.903564 master-0 kubenswrapper[4035]: I0319 09:18:17.903523 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 19 09:18:17.904996 master-0 kubenswrapper[4035]: I0319 09:18:17.904932 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.905366 master-0 kubenswrapper[4035]: I0319 09:18:17.905335 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 19 09:18:17.908304 master-0 kubenswrapper[4035]: I0319 09:18:17.908261 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"]
Mar 19 09:18:17.908714 master-0 kubenswrapper[4035]: I0319 09:18:17.908679 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:18:17.909037 master-0 kubenswrapper[4035]: I0319 09:18:17.908992 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"]
Mar 19 09:18:17.909411 master-0 kubenswrapper[4035]: I0319 09:18:17.909382 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:17.909859 master-0 kubenswrapper[4035]: I0319 09:18:17.909829 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"]
Mar 19 09:18:17.910103 master-0 kubenswrapper[4035]: I0319 09:18:17.910078 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:17.910358 master-0 kubenswrapper[4035]: I0319 09:18:17.910327 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"]
Mar 19 09:18:17.910711 master-0 kubenswrapper[4035]: I0319 09:18:17.910685 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:18:17.912761 master-0 kubenswrapper[4035]: W0319 09:18:17.912725 4035 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'master-0' and this object
Mar 19 09:18:17.912830 master-0 kubenswrapper[4035]: W0319 09:18:17.912757 4035 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'master-0' and this object
Mar 19 09:18:17.912830 master-0 kubenswrapper[4035]: E0319 09:18:17.912771 4035 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 19 09:18:17.912830 master-0 kubenswrapper[4035]: E0319 09:18:17.912799 4035 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 19 09:18:17.912830 master-0 kubenswrapper[4035]: W0319 09:18:17.912816 4035 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: configmaps "openshift-apiserver-operator-config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'master-0' and this object
Mar 19 09:18:17.912830 master-0 kubenswrapper[4035]: E0319 09:18:17.912828 4035 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-apiserver-operator-config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 19 09:18:17.912999 master-0 kubenswrapper[4035]: W0319 09:18:17.912869 4035 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: secrets "openshift-apiserver-operator-serving-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver-operator": no relationship found between node 'master-0' and this object
Mar 19 09:18:17.912999 master-0 kubenswrapper[4035]: E0319 09:18:17.912888 4035 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-operator-serving-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 19 09:18:17.913085 master-0 kubenswrapper[4035]: W0319 09:18:17.913066 4035 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-node-tuning-operator": no relationship found between node 'master-0' and this object
Mar 19 09:18:17.913124 master-0 kubenswrapper[4035]: E0319 09:18:17.913088 4035 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-node-tuning-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 19 09:18:17.913596 master-0 kubenswrapper[4035]: I0319 09:18:17.913569 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"]
Mar 19 09:18:17.913923 master-0 kubenswrapper[4035]: W0319 09:18:17.913894 4035 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls": failed to list *v1.Secret: secrets "node-tuning-operator-tls" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-node-tuning-operator": no relationship found between node 'master-0' and this object
Mar 19 09:18:17.913976 master-0 kubenswrapper[4035]: E0319 09:18:17.913941 4035 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"node-tuning-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-tuning-operator-tls\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-node-tuning-operator\": no relationship found between node 'master-0' and this object" logger="UnhandledError"
Mar 19 09:18:17.914009 master-0 kubenswrapper[4035]: I0319 09:18:17.913996 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:18:17.916261 master-0 kubenswrapper[4035]: I0319 09:18:17.916221 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 19 09:18:17.916261 master-0 kubenswrapper[4035]: I0319 09:18:17.916239 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 19 09:18:17.916589 master-0 kubenswrapper[4035]: I0319 09:18:17.916559 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 19 09:18:17.916693 master-0 kubenswrapper[4035]: I0319 09:18:17.916660 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 19 09:18:17.916693 master-0 kubenswrapper[4035]: I0319 09:18:17.916678 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 19 09:18:17.916794 master-0 kubenswrapper[4035]: I0319 09:18:17.916775 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 19 09:18:17.916835 master-0 kubenswrapper[4035]: I0319 09:18:17.916824 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.917004 master-0 kubenswrapper[4035]: I0319 09:18:17.916970 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 19 09:18:17.917004 master-0 kubenswrapper[4035]: I0319 09:18:17.916782 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 19 09:18:17.917273 master-0 kubenswrapper[4035]: I0319 09:18:17.917248 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.917354 master-0 kubenswrapper[4035]: I0319 09:18:17.917333 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 19 09:18:17.917445 master-0 kubenswrapper[4035]: I0319 09:18:17.917413 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.918415 master-0 kubenswrapper[4035]: I0319 09:18:17.917698 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 19 09:18:17.918415 master-0 kubenswrapper[4035]: I0319 09:18:17.917997 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"]
Mar 19 09:18:17.918483 master-0 kubenswrapper[4035]: I0319 09:18:17.918420 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:17.918763 master-0 kubenswrapper[4035]: I0319 09:18:17.918735 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"]
Mar 19 09:18:17.926067 master-0 kubenswrapper[4035]: I0319 09:18:17.926012 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:17.926209 master-0 kubenswrapper[4035]: I0319 09:18:17.926073 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.936224 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.936447 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.936493 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.936245 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.936932 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.938353 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.938824 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-h4zrl"]
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.938871 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 19 09:18:17.939572 master-0 kubenswrapper[4035]: I0319 09:18:17.939140 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 19 09:18:17.950839 master-0 kubenswrapper[4035]: I0319 09:18:17.950798 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:17.953586 master-0 kubenswrapper[4035]: I0319 09:18:17.953533 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 19 09:18:17.953923 master-0 kubenswrapper[4035]: I0319 09:18:17.953905 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"]
Mar 19 09:18:17.954259 master-0 kubenswrapper[4035]: I0319 09:18:17.954219 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 19 09:18:17.954348 master-0 kubenswrapper[4035]: I0319 09:18:17.954336 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"]
Mar 19 09:18:17.954653 master-0 kubenswrapper[4035]: I0319 09:18:17.954630 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"]
Mar 19 09:18:17.954862 master-0 kubenswrapper[4035]: I0319 09:18:17.954832 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:18:17.955066 master-0 kubenswrapper[4035]: I0319 09:18:17.955031 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:17.955185 master-0 kubenswrapper[4035]: I0319 09:18:17.955171 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:17.955266 master-0 kubenswrapper[4035]: I0319 09:18:17.955243 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.956120 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-k89rz"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.956398 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.956410 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.956814 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-stct6"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.957340 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.959422 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.959738 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.960010 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.960303 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz"
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.960474 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.960858 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.961471 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.961893 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"]
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.962201 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:17.963781 master-0 kubenswrapper[4035]: I0319 09:18:17.962581 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:17.964335 master-0 kubenswrapper[4035]: I0319 09:18:17.964114 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"]
Mar 19 09:18:17.964667 master-0 kubenswrapper[4035]: I0319 09:18:17.964621 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"]
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.964880 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"]
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.965269 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.965479 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.965586 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.965778 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.965802 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.965885 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.965921 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.966049 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.966167 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.966662 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 19 09:18:17.966944 master-0 kubenswrapper[4035]: I0319 09:18:17.966766 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 19 09:18:17.967348 master-0 kubenswrapper[4035]: I0319 09:18:17.967152 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 19 09:18:17.967348 master-0 kubenswrapper[4035]: I0319 09:18:17.967249 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 19 09:18:17.967348 master-0 kubenswrapper[4035]: I0319 09:18:17.967298 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 19 09:18:17.967471 master-0 kubenswrapper[4035]: I0319 09:18:17.967446 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 19 09:18:17.967509 master-0 kubenswrapper[4035]: I0319 09:18:17.967478 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 19 09:18:17.968395 master-0 kubenswrapper[4035]: I0319 09:18:17.967631 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 19 09:18:17.968395 master-0 kubenswrapper[4035]: I0319 09:18:17.967863 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 19 09:18:17.968395 master-0 kubenswrapper[4035]: I0319 09:18:17.967446 4035 reflector.go:368] Caches populated for
*v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 19 09:18:17.968395 master-0 kubenswrapper[4035]: I0319 09:18:17.967794 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 19 09:18:17.968395 master-0 kubenswrapper[4035]: I0319 09:18:17.968098 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 09:18:17.969270 master-0 kubenswrapper[4035]: I0319 09:18:17.968644 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.969479 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.969617 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.969830 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.969841 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.969916 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.969991 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"] Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.970062 4035 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.970110 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.970649 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"] Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.971129 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.971411 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:17.974591 master-0 kubenswrapper[4035]: I0319 09:18:17.974556 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 09:18:17.975242 master-0 kubenswrapper[4035]: I0319 09:18:17.974707 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 09:18:17.975242 master-0 kubenswrapper[4035]: I0319 09:18:17.974806 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 19 09:18:17.975242 master-0 kubenswrapper[4035]: I0319 09:18:17.974981 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 09:18:17.975493 master-0 kubenswrapper[4035]: I0319 09:18:17.975249 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 09:18:17.975493 
master-0 kubenswrapper[4035]: I0319 09:18:17.975405 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 09:18:17.975493 master-0 kubenswrapper[4035]: I0319 09:18:17.975483 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 09:18:17.975780 master-0 kubenswrapper[4035]: I0319 09:18:17.975590 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 09:18:17.975780 master-0 kubenswrapper[4035]: I0319 09:18:17.975679 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 09:18:17.979267 master-0 kubenswrapper[4035]: I0319 09:18:17.978343 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 09:18:17.979267 master-0 kubenswrapper[4035]: I0319 09:18:17.978908 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 09:18:17.979267 master-0 kubenswrapper[4035]: I0319 09:18:17.979197 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 09:18:17.979453 master-0 kubenswrapper[4035]: I0319 09:18:17.979357 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 09:18:17.979482 master-0 kubenswrapper[4035]: I0319 09:18:17.979466 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 09:18:17.981616 master-0 kubenswrapper[4035]: I0319 09:18:17.980522 4035 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 09:18:17.981616 master-0 kubenswrapper[4035]: I0319 09:18:17.980707 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 09:18:17.981616 master-0 kubenswrapper[4035]: I0319 09:18:17.980933 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 09:18:17.981616 master-0 kubenswrapper[4035]: I0319 09:18:17.981134 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"] Mar 19 09:18:17.982163 master-0 kubenswrapper[4035]: I0319 09:18:17.981868 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 09:18:17.982163 master-0 kubenswrapper[4035]: I0319 09:18:17.981887 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 09:18:17.982163 master-0 kubenswrapper[4035]: I0319 09:18:17.982012 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 09:18:17.982163 master-0 kubenswrapper[4035]: I0319 09:18:17.982081 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 09:18:17.982314 master-0 kubenswrapper[4035]: I0319 09:18:17.982250 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:17.984604 master-0 kubenswrapper[4035]: I0319 09:18:17.982579 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 09:18:17.984604 master-0 kubenswrapper[4035]: I0319 09:18:17.982634 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 19 09:18:17.994460 master-0 kubenswrapper[4035]: I0319 09:18:17.985106 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 09:18:17.994460 master-0 kubenswrapper[4035]: I0319 09:18:17.985712 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 19 09:18:17.999713 master-0 kubenswrapper[4035]: I0319 09:18:17.998406 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 09:18:17.999713 master-0 kubenswrapper[4035]: I0319 09:18:17.998625 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 09:18:17.999713 master-0 kubenswrapper[4035]: I0319 09:18:17.998879 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 09:18:17.999713 master-0 kubenswrapper[4035]: I0319 09:18:17.999107 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 09:18:17.999713 master-0 kubenswrapper[4035]: I0319 09:18:17.999198 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 09:18:17.999713 master-0 kubenswrapper[4035]: I0319 09:18:17.999291 4035 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 09:18:17.999713 master-0 kubenswrapper[4035]: I0319 09:18:17.999386 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 09:18:18.003518 master-0 kubenswrapper[4035]: I0319 09:18:18.002311 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 19 09:18:18.003518 master-0 kubenswrapper[4035]: I0319 09:18:18.002887 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"] Mar 19 09:18:18.003518 master-0 kubenswrapper[4035]: I0319 09:18:18.003092 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"] Mar 19 09:18:18.003801 master-0 kubenswrapper[4035]: I0319 09:18:18.003773 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85vjd\" (UniqueName: \"kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:18.003867 master-0 kubenswrapper[4035]: I0319 09:18:18.003809 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.003867 master-0 kubenswrapper[4035]: I0319 09:18:18.003840 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:18.003867 master-0 kubenswrapper[4035]: I0319 09:18:18.003864 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.003976 master-0 kubenswrapper[4035]: I0319 09:18:18.003885 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.003976 master-0 kubenswrapper[4035]: I0319 09:18:18.003907 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:18.003976 master-0 kubenswrapper[4035]: I0319 09:18:18.003931 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod 
\"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:18.003976 master-0 kubenswrapper[4035]: I0319 09:18:18.003951 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.003976 master-0 kubenswrapper[4035]: I0319 09:18:18.003974 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xjhk\" (UniqueName: \"kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.004150 master-0 kubenswrapper[4035]: I0319 09:18:18.003996 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47plx\" (UniqueName: \"kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:18.004150 master-0 kubenswrapper[4035]: I0319 09:18:18.004016 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " 
pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:18.004150 master-0 kubenswrapper[4035]: I0319 09:18:18.004041 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.004150 master-0 kubenswrapper[4035]: I0319 09:18:18.004064 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2svkc\" (UniqueName: \"kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:18.004150 master-0 kubenswrapper[4035]: I0319 09:18:18.004084 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:18.004150 master-0 kubenswrapper[4035]: I0319 09:18:18.004123 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:18.004150 master-0 kubenswrapper[4035]: I0319 09:18:18.004147 4035 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbzvl\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004171 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004193 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004218 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004240 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004261 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004282 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004304 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8bm4\" (UniqueName: \"kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004328 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: 
\"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004349 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:18.004392 master-0 kubenswrapper[4035]: I0319 09:18:18.004377 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c654s\" (UniqueName: \"kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004402 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004425 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token\") pod 
\"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004449 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004472 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tll8k\" (UniqueName: \"kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004495 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004520 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdvd\" (UniqueName: \"kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " 
pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004562 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004586 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004626 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004669 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"] Mar 19 09:18:18.004702 master-0 kubenswrapper[4035]: I0319 09:18:18.004692 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: 
\"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.005006 master-0 kubenswrapper[4035]: I0319 09:18:18.004790 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:18.005006 master-0 kubenswrapper[4035]: I0319 09:18:18.004865 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.005006 master-0 kubenswrapper[4035]: I0319 09:18:18.004909 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:18.005006 master-0 kubenswrapper[4035]: I0319 09:18:18.004972 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 
19 09:18:18.005006 master-0 kubenswrapper[4035]: I0319 09:18:18.005004 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnxk9\" (UniqueName: \"kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:18.005134 master-0 kubenswrapper[4035]: I0319 09:18:18.005047 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbktm\" (UniqueName: \"kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm\") pod \"csi-snapshot-controller-operator-5f5d689c6b-d89zz\" (UID: \"43fca1a4-4fa7-4a43-b9c4-7f50a8737643\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" Mar 19 09:18:18.005134 master-0 kubenswrapper[4035]: I0319 09:18:18.005093 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4n26\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.005134 master-0 kubenswrapper[4035]: I0319 09:18:18.005113 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 09:18:18.005134 master-0 kubenswrapper[4035]: I0319 09:18:18.005117 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvnp7\" (UniqueName: \"kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: 
\"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.005239 master-0 kubenswrapper[4035]: I0319 09:18:18.005140 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:18.005239 master-0 kubenswrapper[4035]: I0319 09:18:18.005158 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8b7s\" (UniqueName: \"kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.005239 master-0 kubenswrapper[4035]: I0319 09:18:18.005190 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hccqk\" (UniqueName: \"kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:18.005315 master-0 kubenswrapper[4035]: I0319 09:18:18.005250 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnjq\" (UniqueName: \"kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " 
pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.005315 master-0 kubenswrapper[4035]: I0319 09:18:18.005283 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:18.005315 master-0 kubenswrapper[4035]: I0319 09:18:18.005304 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"] Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005335 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005390 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005441 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005483 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005537 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005589 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h925l\" (UniqueName: \"kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005627 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: 
\"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005656 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005680 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7thvr\" (UniqueName: \"kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005702 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005725 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005745 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:18.005808 master-0 kubenswrapper[4035]: I0319 09:18:18.005783 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.005840 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.005868 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.005905 4035 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8cg\" (UniqueName: \"kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.005950 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.006016 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.006044 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.006066 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.006089 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.006111 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.006132 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2hfh\" (UniqueName: \"kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:18.006149 master-0 kubenswrapper[4035]: I0319 09:18:18.006151 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7rj\" (UniqueName: \"kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006185 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"] Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006185 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006218 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006238 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006268 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006285 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr4bl\" (UniqueName: \"kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006301 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006318 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7plsz\" (UniqueName: \"kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006333 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:18.006422 master-0 
kubenswrapper[4035]: I0319 09:18:18.006347 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:18.006422 master-0 kubenswrapper[4035]: I0319 09:18:18.006362 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.008483 master-0 kubenswrapper[4035]: I0319 09:18:18.007618 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"] Mar 19 09:18:18.009824 master-0 kubenswrapper[4035]: I0319 09:18:18.009761 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"] Mar 19 09:18:18.010406 master-0 kubenswrapper[4035]: I0319 09:18:18.010378 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"] Mar 19 09:18:18.014226 master-0 kubenswrapper[4035]: I0319 09:18:18.011685 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"] Mar 19 09:18:18.014226 master-0 kubenswrapper[4035]: I0319 09:18:18.013518 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"] Mar 19 
09:18:18.014226 master-0 kubenswrapper[4035]: I0319 09:18:18.013642 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-k89rz"] Mar 19 09:18:18.015276 master-0 kubenswrapper[4035]: I0319 09:18:18.015105 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"] Mar 19 09:18:18.016598 master-0 kubenswrapper[4035]: I0319 09:18:18.016018 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"] Mar 19 09:18:18.017778 master-0 kubenswrapper[4035]: I0319 09:18:18.016868 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-p9bbz"] Mar 19 09:18:18.017778 master-0 kubenswrapper[4035]: I0319 09:18:18.017494 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:18.018829 master-0 kubenswrapper[4035]: I0319 09:18:18.018799 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"] Mar 19 09:18:18.019195 master-0 kubenswrapper[4035]: I0319 09:18:18.019175 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 09:18:18.019743 master-0 kubenswrapper[4035]: I0319 09:18:18.019696 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"] Mar 19 09:18:18.021260 master-0 kubenswrapper[4035]: I0319 09:18:18.021069 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"] Mar 19 09:18:18.021619 master-0 kubenswrapper[4035]: I0319 09:18:18.021600 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-insights/insights-operator-68bf6ff9d6-h4zrl"] Mar 19 09:18:18.022658 master-0 kubenswrapper[4035]: I0319 09:18:18.022521 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"] Mar 19 09:18:18.024480 master-0 kubenswrapper[4035]: I0319 09:18:18.023954 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"] Mar 19 09:18:18.024695 master-0 kubenswrapper[4035]: I0319 09:18:18.024652 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"] Mar 19 09:18:18.025719 master-0 kubenswrapper[4035]: I0319 09:18:18.025668 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"] Mar 19 09:18:18.026779 master-0 kubenswrapper[4035]: I0319 09:18:18.026643 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"] Mar 19 09:18:18.027803 master-0 kubenswrapper[4035]: I0319 09:18:18.027741 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz"] Mar 19 09:18:18.029285 master-0 kubenswrapper[4035]: I0319 09:18:18.029249 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"] Mar 19 09:18:18.030971 master-0 kubenswrapper[4035]: I0319 09:18:18.030949 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-stct6"] Mar 19 09:18:18.032186 master-0 kubenswrapper[4035]: I0319 09:18:18.032142 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"] Mar 19 09:18:18.106700 master-0 
kubenswrapper[4035]: I0319 09:18:18.106578 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.106700 master-0 kubenswrapper[4035]: I0319 09:18:18.106621 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svkc\" (UniqueName: \"kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:18.106887 master-0 kubenswrapper[4035]: I0319 09:18:18.106804 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:18.106932 master-0 kubenswrapper[4035]: E0319 09:18:18.106906 4035 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:18.107024 master-0 kubenswrapper[4035]: E0319 09:18:18.106989 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.606968914 +0000 UTC m=+108.065583855 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found Mar 19 09:18:18.107084 master-0 kubenswrapper[4035]: I0319 09:18:18.107023 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:18.107084 master-0 kubenswrapper[4035]: I0319 09:18:18.107057 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbzvl\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.107084 master-0 kubenswrapper[4035]: I0319 09:18:18.107085 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:18.107203 master-0 kubenswrapper[4035]: I0319 09:18:18.107102 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " 
pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.107203 master-0 kubenswrapper[4035]: I0319 09:18:18.107120 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:18.107203 master-0 kubenswrapper[4035]: I0319 09:18:18.107140 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.107203 master-0 kubenswrapper[4035]: I0319 09:18:18.107155 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:18.107203 master-0 kubenswrapper[4035]: I0319 09:18:18.107170 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.107203 master-0 kubenswrapper[4035]: I0319 09:18:18.107188 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-r8bm4\" (UniqueName: \"kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.107203 master-0 kubenswrapper[4035]: I0319 09:18:18.107206 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:18.107480 master-0 kubenswrapper[4035]: I0319 09:18:18.107222 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:18.107480 master-0 kubenswrapper[4035]: I0319 09:18:18.107239 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c654s\" (UniqueName: \"kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:18.107480 master-0 kubenswrapper[4035]: I0319 09:18:18.107259 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:18.107480 master-0 kubenswrapper[4035]: I0319 09:18:18.107279 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:18.107480 master-0 kubenswrapper[4035]: I0319 09:18:18.107297 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.107706 master-0 kubenswrapper[4035]: E0319 09:18:18.107504 4035 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 09:18:18.107706 master-0 kubenswrapper[4035]: E0319 09:18:18.107578 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.607557141 +0000 UTC m=+108.066172082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found Mar 19 09:18:18.108461 master-0 kubenswrapper[4035]: I0319 09:18:18.108323 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.108461 master-0 kubenswrapper[4035]: I0319 09:18:18.108389 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.108611 master-0 kubenswrapper[4035]: I0319 09:18:18.108590 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.109403 master-0 kubenswrapper[4035]: I0319 09:18:18.109370 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:18.109476 master-0 kubenswrapper[4035]: I0319 09:18:18.109416 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll8k\" (UniqueName: \"kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:18.109476 master-0 kubenswrapper[4035]: I0319 09:18:18.109444 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.109476 master-0 kubenswrapper[4035]: I0319 09:18:18.109468 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdvd\" (UniqueName: \"kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.109610 master-0 kubenswrapper[4035]: I0319 09:18:18.109492 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.109610 master-0 kubenswrapper[4035]: I0319 09:18:18.109516 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.109610 master-0 kubenswrapper[4035]: I0319 09:18:18.109580 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.109610 master-0 kubenswrapper[4035]: I0319 09:18:18.109606 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:18.109767 master-0 kubenswrapper[4035]: I0319 09:18:18.109633 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:18.109767 master-0 kubenswrapper[4035]: I0319 09:18:18.109660 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " 
pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.109767 master-0 kubenswrapper[4035]: E0319 09:18:18.109728 4035 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 09:18:18.109767 master-0 kubenswrapper[4035]: E0319 09:18:18.109768 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.609753904 +0000 UTC m=+108.068368845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found Mar 19 09:18:18.110253 master-0 kubenswrapper[4035]: I0319 09:18:18.110211 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:18.110341 master-0 kubenswrapper[4035]: I0319 09:18:18.110320 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.111082 master-0 kubenswrapper[4035]: I0319 09:18:18.111047 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:18.111159 master-0 kubenswrapper[4035]: I0319 09:18:18.111110 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:18.111207 master-0 kubenswrapper[4035]: I0319 09:18:18.111145 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnxk9\" (UniqueName: \"kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:18.111272 master-0 kubenswrapper[4035]: I0319 09:18:18.111227 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8b7s\" (UniqueName: \"kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.111316 master-0 kubenswrapper[4035]: I0319 09:18:18.111262 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" 
(UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.111316 master-0 kubenswrapper[4035]: I0319 09:18:18.111289 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbktm\" (UniqueName: \"kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm\") pod \"csi-snapshot-controller-operator-5f5d689c6b-d89zz\" (UID: \"43fca1a4-4fa7-4a43-b9c4-7f50a8737643\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" Mar 19 09:18:18.111401 master-0 kubenswrapper[4035]: I0319 09:18:18.111347 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4n26\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.111401 master-0 kubenswrapper[4035]: I0319 09:18:18.111377 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnp7\" (UniqueName: \"kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.111479 master-0 kubenswrapper[4035]: E0319 09:18:18.111451 4035 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:18.111516 master-0 kubenswrapper[4035]: E0319 09:18:18.111498 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls 
podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.611482664 +0000 UTC m=+108.070097705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:18.111611 master-0 kubenswrapper[4035]: I0319 09:18:18.111576 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:18.111681 master-0 kubenswrapper[4035]: I0319 09:18:18.111657 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hccqk\" (UniqueName: \"kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:18.111731 master-0 kubenswrapper[4035]: I0319 09:18:18.111694 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnjq\" (UniqueName: \"kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.111769 master-0 kubenswrapper[4035]: I0319 09:18:18.111757 4035 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:18.111882 master-0 kubenswrapper[4035]: I0319 09:18:18.111837 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.111940 master-0 kubenswrapper[4035]: I0319 09:18:18.111890 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.112097 master-0 kubenswrapper[4035]: E0319 09:18:18.112075 4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:18.112097 master-0 kubenswrapper[4035]: I0319 09:18:18.112089 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.112195 master-0 kubenswrapper[4035]: 
I0319 09:18:18.112114 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:18.112195 master-0 kubenswrapper[4035]: E0319 09:18:18.112117 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.612105822 +0000 UTC m=+108.070720843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:18.112195 master-0 kubenswrapper[4035]: I0319 09:18:18.112160 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:18.112308 master-0 kubenswrapper[4035]: I0319 09:18:18.112224 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:18.112308 master-0 
kubenswrapper[4035]: I0319 09:18:18.112249 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.112308 master-0 kubenswrapper[4035]: I0319 09:18:18.112273 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h925l\" (UniqueName: \"kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:18.112308 master-0 kubenswrapper[4035]: I0319 09:18:18.112292 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:18.112308 master-0 kubenswrapper[4035]: I0319 09:18:18.112309 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7thvr\" (UniqueName: \"kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:18.112505 master-0 kubenswrapper[4035]: I0319 09:18:18.112328 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:18.112505 master-0 kubenswrapper[4035]: I0319 09:18:18.112346 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:18.112505 master-0 kubenswrapper[4035]: I0319 09:18:18.112364 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:18.112505 master-0 kubenswrapper[4035]: I0319 09:18:18.112501 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:18.112833 master-0 kubenswrapper[4035]: E0319 09:18:18.112808 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 09:18:18.112880 master-0 kubenswrapper[4035]: E0319 09:18:18.112862 4035 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.612847563 +0000 UTC m=+108.071462504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found Mar 19 09:18:18.112880 master-0 kubenswrapper[4035]: I0319 09:18:18.112821 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:18.112978 master-0 kubenswrapper[4035]: E0319 09:18:18.112882 4035 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 09:18:18.112978 master-0 kubenswrapper[4035]: E0319 09:18:18.112927 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.612913555 +0000 UTC m=+108.071528496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found Mar 19 09:18:18.113076 master-0 kubenswrapper[4035]: I0319 09:18:18.112813 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:18.113195 master-0 kubenswrapper[4035]: I0319 09:18:18.113140 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.113195 master-0 kubenswrapper[4035]: I0319 09:18:18.113148 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:18.113195 master-0 kubenswrapper[4035]: I0319 09:18:18.113191 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 
19 09:18:18.113402 master-0 kubenswrapper[4035]: I0319 09:18:18.113222 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.113402 master-0 kubenswrapper[4035]: I0319 09:18:18.113221 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.113402 master-0 kubenswrapper[4035]: E0319 09:18:18.113257 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 09:18:18.113402 master-0 kubenswrapper[4035]: E0319 09:18:18.113304 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.613289796 +0000 UTC m=+108.071904737 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:18.113402 master-0 kubenswrapper[4035]: I0319 09:18:18.113347 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:18.113402 master-0 kubenswrapper[4035]: I0319 09:18:18.113398 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8cg\" (UniqueName: \"kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113441 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113477 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113500 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113527 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113572 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113596 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113627 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2hfh\" (UniqueName: \"kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113641 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"
Mar 19 09:18:18.113663 master-0 kubenswrapper[4035]: I0319 09:18:18.113654 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7rj\" (UniqueName: \"kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113678 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113703 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113727 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113752 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113781 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113809 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr4bl\" (UniqueName: \"kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113836 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113861 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7plsz\" (UniqueName: \"kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113886 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113916 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113939 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113979 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85vjd\" (UniqueName: \"kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:18:18.113999 master-0 kubenswrapper[4035]: I0319 09:18:18.113978 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114083 4035 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58zw\" (UniqueName: \"kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114146 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114177 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114202 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114204 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114235 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114253 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114271 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114289 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114305 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xjhk\" (UniqueName: \"kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114321 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47plx\" (UniqueName: \"kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114337 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114352 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114343 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: I0319 09:18:18.114354 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.114569 master-0 kubenswrapper[4035]: E0319 09:18:18.114468 4035 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 19 09:18:18.115116 master-0 kubenswrapper[4035]: E0319 09:18:18.114512 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.61450387 +0000 UTC m=+108.073118811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:18.115116 master-0 kubenswrapper[4035]: E0319 09:18:18.114515 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:18.115116 master-0 kubenswrapper[4035]: E0319 09:18:18.114570 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.614557312 +0000 UTC m=+108.073172353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:18.115116 master-0 kubenswrapper[4035]: E0319 09:18:18.114749 4035 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:18.115116 master-0 kubenswrapper[4035]: E0319 09:18:18.114793 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.614775109 +0000 UTC m=+108.073390050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found
Mar 19 09:18:18.115116 master-0 kubenswrapper[4035]: E0319 09:18:18.114811 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 19 09:18:18.115116 master-0 kubenswrapper[4035]: E0319 09:18:18.114843 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.614834421 +0000 UTC m=+108.073449362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found
Mar 19 09:18:18.115395 master-0 kubenswrapper[4035]: E0319 09:18:18.115186 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:18.115395 master-0 kubenswrapper[4035]: E0319 09:18:18.115229 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:18.615215322 +0000 UTC m=+108.073830363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:18.116406 master-0 kubenswrapper[4035]: I0319 09:18:18.115837 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"
Mar 19 09:18:18.116406 master-0 kubenswrapper[4035]: I0319 09:18:18.116046 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:18.116406 master-0 kubenswrapper[4035]: I0319 09:18:18.116016 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.116406 master-0 kubenswrapper[4035]: I0319 09:18:18.116238 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:18:18.116861 master-0 kubenswrapper[4035]: I0319 09:18:18.116827 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.117913 master-0 kubenswrapper[4035]: I0319 09:18:18.117636 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:18.117913 master-0 kubenswrapper[4035]: I0319 09:18:18.117667 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:18.117913 master-0 kubenswrapper[4035]: I0319 09:18:18.117815 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"
Mar 19 09:18:18.117913 master-0 kubenswrapper[4035]: I0319 09:18:18.117909 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:18.118072 master-0 kubenswrapper[4035]: I0319 09:18:18.117994 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:18.118226 master-0 kubenswrapper[4035]: I0319 09:18:18.118189 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:18:18.118226 master-0 kubenswrapper[4035]: I0319 09:18:18.118221 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:18:18.118362 master-0 kubenswrapper[4035]: I0319 09:18:18.118330 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:18:18.118972 master-0 kubenswrapper[4035]: I0319 09:18:18.118372 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:18.119080 master-0 kubenswrapper[4035]: I0319 09:18:18.119057 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.119723 master-0 kubenswrapper[4035]: I0319 09:18:18.119669 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:18.120297 master-0 kubenswrapper[4035]: I0319 09:18:18.120091 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"
Mar 19 09:18:18.120297 master-0 kubenswrapper[4035]: I0319 09:18:18.120256 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:18.121077 master-0 kubenswrapper[4035]: I0319 09:18:18.121046 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"
Mar 19 09:18:18.121293 master-0 kubenswrapper[4035]: I0319 09:18:18.121265 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:18:18.142306 master-0 kubenswrapper[4035]: I0319 09:18:18.142149 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svkc\" (UniqueName: \"kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:18.157617 master-0 kubenswrapper[4035]: I0319 09:18:18.157578 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"
Mar 19 09:18:18.175584 master-0 kubenswrapper[4035]: I0319 09:18:18.175520 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c654s\" (UniqueName: \"kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"
Mar 19 09:18:18.195027 master-0 kubenswrapper[4035]: I0319 09:18:18.194988 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:18:18.215076 master-0 kubenswrapper[4035]: I0319 09:18:18.215042 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:18.215680 master-0 kubenswrapper[4035]: I0319 09:18:18.215638 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:18.215729 master-0 kubenswrapper[4035]: I0319 09:18:18.215693 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t58zw\" (UniqueName: \"kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:18.215904 master-0 kubenswrapper[4035]: I0319 09:18:18.215877 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:18.216071 master-0 kubenswrapper[4035]: I0319 09:18:18.216047 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbzvl\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:18.216161 master-0 kubenswrapper[4035]: I0319 09:18:18.216134 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:18.227881 master-0 kubenswrapper[4035]: I0319 09:18:18.227846 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"
Mar 19 09:18:18.235363 master-0 kubenswrapper[4035]: I0319 09:18:18.235335 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:18.258317 master-0 kubenswrapper[4035]: I0319 09:18:18.258276 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8bm4\" (UniqueName: \"kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:18.275416 master-0 kubenswrapper[4035]: I0319 09:18:18.275368 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdvd\" (UniqueName: \"kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:18.299737 master-0 kubenswrapper[4035]: I0319 09:18:18.299429 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:18.310015 master-0 kubenswrapper[4035]: I0319 09:18:18.308916 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:18.324447 master-0 kubenswrapper[4035]: I0319 09:18:18.324373 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbktm\" (UniqueName: \"kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm\") pod \"csi-snapshot-controller-operator-5f5d689c6b-d89zz\" (UID: \"43fca1a4-4fa7-4a43-b9c4-7f50a8737643\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz"
Mar 19 09:18:18.334380 master-0 kubenswrapper[4035]: I0319 09:18:18.334327 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:18.334753 master-0 kubenswrapper[4035]: I0319 09:18:18.334731 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:18.339552 master-0 kubenswrapper[4035]: I0319 09:18:18.339468 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnp7\" (UniqueName: \"kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:18.348583 master-0 kubenswrapper[4035]: I0319 09:18:18.348367 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:18:18.361415 master-0 kubenswrapper[4035]: I0319 09:18:18.361361 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll8k\" (UniqueName: \"kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:18.363431 master-0 kubenswrapper[4035]: I0319 09:18:18.363335 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:18.373347 master-0 kubenswrapper[4035]: I0319 09:18:18.372848 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"
Mar 19 09:18:18.404797 master-0 kubenswrapper[4035]: I0319 09:18:18.404758 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"]
Mar 19 09:18:18.404916 master-0 kubenswrapper[4035]: I0319 09:18:18.404812 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnxk9\" (UniqueName: \"kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:18.413956 master-0 kubenswrapper[4035]: I0319 09:18:18.413877 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" Mar 19 09:18:18.422949 master-0 kubenswrapper[4035]: I0319 09:18:18.422875 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hccqk\" (UniqueName: \"kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:18.431172 master-0 kubenswrapper[4035]: W0319 09:18:18.431130 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46c7cde3_2cb4_4fa8_94ca_d5feff877da9.slice/crio-227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6 WatchSource:0}: Error finding container 227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6: Status 404 returned error can't find the container with id 227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6 Mar 19 09:18:18.443618 master-0 kubenswrapper[4035]: I0319 09:18:18.443429 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:18.463382 master-0 kubenswrapper[4035]: I0319 09:18:18.463344 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4n26\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.478831 master-0 kubenswrapper[4035]: I0319 09:18:18.478254 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:18.494826 master-0 kubenswrapper[4035]: I0319 09:18:18.494765 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.506713 master-0 kubenswrapper[4035]: I0319 09:18:18.506089 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnjq\" (UniqueName: \"kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.511299 master-0 kubenswrapper[4035]: I0319 09:18:18.511231 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"] Mar 19 09:18:18.534904 master-0 kubenswrapper[4035]: I0319 09:18:18.529045 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7thvr\" (UniqueName: \"kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:18.540146 master-0 kubenswrapper[4035]: I0319 09:18:18.540091 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h925l\" (UniqueName: \"kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" 
(UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:18.565829 master-0 kubenswrapper[4035]: I0319 09:18:18.565732 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"] Mar 19 09:18:18.566116 master-0 kubenswrapper[4035]: I0319 09:18:18.566079 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"] Mar 19 09:18:18.576432 master-0 kubenswrapper[4035]: I0319 09:18:18.576394 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8cg\" (UniqueName: \"kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:18.581109 master-0 kubenswrapper[4035]: I0319 09:18:18.581070 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7rj\" (UniqueName: \"kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:18.585376 master-0 kubenswrapper[4035]: I0319 09:18:18.585345 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"] Mar 19 09:18:18.586366 master-0 kubenswrapper[4035]: W0319 09:18:18.586324 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda67ae8dc_240d_4708_9139_1d49c601e552.slice/crio-8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2 WatchSource:0}: 
Error finding container 8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2: Status 404 returned error can't find the container with id 8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2 Mar 19 09:18:18.600238 master-0 kubenswrapper[4035]: I0319 09:18:18.600173 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2hfh\" (UniqueName: \"kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:18.614084 master-0 kubenswrapper[4035]: I0319 09:18:18.613568 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz"] Mar 19 09:18:18.621868 master-0 kubenswrapper[4035]: I0319 09:18:18.621835 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjhk\" (UniqueName: \"kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.626276 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: E0319 09:18:18.626431 4035 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 
09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.626836 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.626860 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.626893 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.626964 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627021 4035 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627048 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627106 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627151 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627169 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627223 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627245 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: I0319 09:18:18.627271 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: E0319 09:18:18.627371 4035 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: E0319 09:18:18.627401 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627388276 +0000 UTC m=+109.086003217 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found Mar 19 09:18:18.627936 master-0 kubenswrapper[4035]: E0319 09:18:18.627437 4035 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627454 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627447917 +0000 UTC m=+109.086062858 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627466 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627461428 +0000 UTC m=+109.086076369 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627501 4035 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627520 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627514759 +0000 UTC m=+109.086129700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627567 4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627585 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627580041 +0000 UTC m=+109.086194982 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627704 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627729 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627722905 +0000 UTC m=+109.086337846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627765 4035 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627781 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627776267 +0000 UTC m=+109.086391208 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627812 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627826 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.627821468 +0000 UTC m=+109.086436409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found Mar 19 09:18:18.630109 master-0 kubenswrapper[4035]: E0319 09:18:18.627857 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.627887 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.62788107 +0000 UTC m=+109.086496011 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.627989 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.628003 4035 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.628039 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.628011 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.628004853 +0000 UTC m=+109.086619794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.628098 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. 
No retries permitted until 2026-03-19 09:18:19.628080165 +0000 UTC m=+109.086695096 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.628252 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.62824223 +0000 UTC m=+109.086857171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.628315 4035 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: E0319 09:18:18.628334 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.628328303 +0000 UTC m=+109.086943234 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:18.630529 master-0 kubenswrapper[4035]: W0319 09:18:18.629796 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fca1a4_4fa7_4a43_b9c4_7f50a8737643.slice/crio-2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0 WatchSource:0}: Error finding container 2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0: Status 404 returned error can't find the container with id 2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0 Mar 19 09:18:18.635308 master-0 kubenswrapper[4035]: I0319 09:18:18.635267 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"] Mar 19 09:18:18.638270 master-0 kubenswrapper[4035]: I0319 09:18:18.637904 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47plx\" (UniqueName: \"kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:18.638270 master-0 kubenswrapper[4035]: I0319 09:18:18.638068 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:18.648051 master-0 kubenswrapper[4035]: W0319 09:18:18.648009 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53bff8e4_bf60_4386_8905_49d43fd6c420.slice/crio-703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059 WatchSource:0}: Error finding container 703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059: Status 404 returned error can't find the container with id 703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059 Mar 19 09:18:18.672777 master-0 kubenswrapper[4035]: I0319 09:18:18.665115 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7plsz\" (UniqueName: \"kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:18.673182 master-0 kubenswrapper[4035]: I0319 09:18:18.673151 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"] Mar 19 09:18:18.696431 master-0 kubenswrapper[4035]: I0319 09:18:18.696104 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:18.701224 master-0 kubenswrapper[4035]: I0319 09:18:18.701199 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85vjd\" (UniqueName: \"kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:18.703717 master-0 kubenswrapper[4035]: I0319 09:18:18.703052 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:18.722281 master-0 kubenswrapper[4035]: I0319 09:18:18.722246 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58zw\" (UniqueName: \"kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:18.723603 master-0 kubenswrapper[4035]: I0319 09:18:18.723413 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 09:18:18.744111 master-0 kubenswrapper[4035]: I0319 09:18:18.744037 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 09:18:18.764493 master-0 kubenswrapper[4035]: I0319 09:18:18.764258 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 09:18:18.790490 master-0 kubenswrapper[4035]: I0319 09:18:18.790406 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-h4zrl"] Mar 19 
09:18:18.800187 master-0 kubenswrapper[4035]: W0319 09:18:18.800139 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e8c62b_97c3_4c0c_85d3_f660118831fd.slice/crio-f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337 WatchSource:0}: Error finding container f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337: Status 404 returned error can't find the container with id f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337 Mar 19 09:18:18.803870 master-0 kubenswrapper[4035]: I0319 09:18:18.803839 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 19 09:18:18.810564 master-0 kubenswrapper[4035]: I0319 09:18:18.810522 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8b7s\" (UniqueName: \"kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:18.810720 master-0 kubenswrapper[4035]: I0319 09:18:18.810699 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:18.822043 master-0 kubenswrapper[4035]: W0319 09:18:18.821962 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod672ad0aa_a0c5_4640_840d_3ffa02c55d62.slice/crio-79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037 WatchSource:0}: Error finding container 79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037: Status 404 returned error can't find the container with id 79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037 Mar 19 09:18:18.854059 master-0 kubenswrapper[4035]: I0319 09:18:18.854012 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"] Mar 19 09:18:18.861438 master-0 kubenswrapper[4035]: W0319 09:18:18.861406 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b41b5_82d8_4d47_8350_79644a2c9360.slice/crio-8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d WatchSource:0}: Error finding container 8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d: Status 404 returned error can't find the container with id 8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d Mar 19 09:18:18.863737 master-0 kubenswrapper[4035]: I0319 09:18:18.863717 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 09:18:18.865473 master-0 kubenswrapper[4035]: I0319 09:18:18.863766 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:18.873721 master-0 kubenswrapper[4035]: I0319 09:18:18.873668 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"] Mar 19 09:18:18.875799 master-0 kubenswrapper[4035]: W0319 09:18:18.875763 4035 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16a69ef7_2fc3_44e4_bc5c_ed50778ef9ff.slice/crio-eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071 WatchSource:0}: Error finding container eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071: Status 404 returned error can't find the container with id eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071 Mar 19 09:18:18.919462 master-0 kubenswrapper[4035]: I0319 09:18:18.919406 4035 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:18.998273 master-0 kubenswrapper[4035]: I0319 09:18:18.996734 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"] Mar 19 09:18:19.004873 master-0 kubenswrapper[4035]: I0319 09:18:19.004061 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 09:18:19.016388 master-0 kubenswrapper[4035]: I0319 09:18:19.016340 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr4bl\" (UniqueName: \"kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:19.081530 master-0 kubenswrapper[4035]: I0319 09:18:19.081494 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"] Mar 19 09:18:19.091181 master-0 kubenswrapper[4035]: E0319 09:18:19.091135 4035 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:copy-catalogd-manifests,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85,Command:[/bin/sh],Args:[-c cp -a /openshift/manifests 
/operand-assets/catalogd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:operand-assets,ReadOnly:false,MountPath:/operand-assets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85vjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000360000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-olm-operator-67dcd4998-pqxp5_openshift-cluster-olm-operator(17e0cb4a-e776-4886-927e-ae446af7f234): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 19 09:18:19.092334 master-0 kubenswrapper[4035]: E0319 09:18:19.092293 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" podUID="17e0cb4a-e776-4886-927e-ae446af7f234" Mar 19 09:18:19.103271 master-0 kubenswrapper[4035]: I0319 09:18:19.103232 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 19 09:18:19.106102 master-0 kubenswrapper[4035]: E0319 09:18:19.106073 
4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 09:18:19.106216 master-0 kubenswrapper[4035]: E0319 09:18:19.106150 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.606132887 +0000 UTC m=+109.064747828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found Mar 19 09:18:19.108377 master-0 kubenswrapper[4035]: E0319 09:18:19.108354 4035 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 09:18:19.108496 master-0 kubenswrapper[4035]: E0319 09:18:19.108418 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert podName:70258988-8374-4aee-aaa2-be3c2e853062 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.608406423 +0000 UTC m=+109.067021374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert") pod "openshift-apiserver-operator-d65958b8-hrb9m" (UID: "70258988-8374-4aee-aaa2-be3c2e853062") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:18:19.113137 master-0 kubenswrapper[4035]: E0319 09:18:19.113109 4035 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:18:19.113236 master-0 kubenswrapper[4035]: E0319 09:18:19.113163 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config podName:70258988-8374-4aee-aaa2-be3c2e853062 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:19.61315133 +0000 UTC m=+109.071766281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config") pod "openshift-apiserver-operator-d65958b8-hrb9m" (UID: "70258988-8374-4aee-aaa2-be3c2e853062") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:18:19.192515 master-0 kubenswrapper[4035]: I0319 09:18:19.192472 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerStarted","Data":"f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337"} Mar 19 09:18:19.193338 master-0 kubenswrapper[4035]: I0319 09:18:19.193261 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" event={"ID":"a67ae8dc-240d-4708-9139-1d49c601e552","Type":"ContainerStarted","Data":"8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2"} Mar 19 
09:18:19.194349 master-0 kubenswrapper[4035]: I0319 09:18:19.194073 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" event={"ID":"ca2f7cb3-8812-4fe3-83a5-61668ef87f99","Type":"ContainerStarted","Data":"eab66404c12034ae89f04e45ade44912e55d6fddf5edcf6fc585e549c9b0d555"} Mar 19 09:18:19.194828 master-0 kubenswrapper[4035]: I0319 09:18:19.194797 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" event={"ID":"fe1881fb-c670-442a-a092-c1eee6b7d5e5","Type":"ContainerStarted","Data":"2b99a9e40477692f9f0735d27cce4c13db8b181a07746d8c9e160e5b7831c820"} Mar 19 09:18:19.195800 master-0 kubenswrapper[4035]: I0319 09:18:19.195617 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerStarted","Data":"3c61e204454e38428fa04296fdaa0b86068d8df14b3972facff7186f87934a5b"} Mar 19 09:18:19.196409 master-0 kubenswrapper[4035]: I0319 09:18:19.196384 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-p9bbz" event={"ID":"672ad0aa-a0c5-4640-840d-3ffa02c55d62","Type":"ContainerStarted","Data":"79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037"} Mar 19 09:18:19.197299 master-0 kubenswrapper[4035]: I0319 09:18:19.197276 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" event={"ID":"53bff8e4-bf60-4386-8905-49d43fd6c420","Type":"ContainerStarted","Data":"703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059"} Mar 19 09:18:19.199082 master-0 kubenswrapper[4035]: I0319 09:18:19.198329 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" event={"ID":"46c7cde3-2cb4-4fa8-94ca-d5feff877da9","Type":"ContainerStarted","Data":"e9208fca3070b80809292873e901e7513b6e0cbe29792fde8a62dcde9ce791be"} Mar 19 09:18:19.199082 master-0 kubenswrapper[4035]: I0319 09:18:19.198352 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" event={"ID":"46c7cde3-2cb4-4fa8-94ca-d5feff877da9","Type":"ContainerStarted","Data":"227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6"} Mar 19 09:18:19.199875 master-0 kubenswrapper[4035]: I0319 09:18:19.199782 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" event={"ID":"525b41b5-82d8-4d47-8350-79644a2c9360","Type":"ContainerStarted","Data":"8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d"} Mar 19 09:18:19.200585 master-0 kubenswrapper[4035]: I0319 09:18:19.200396 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" event={"ID":"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4","Type":"ContainerStarted","Data":"e924b0646dc2650e31e1b4cadf6eac6293c32b11a283f47d90fa34c50c73d4f0"} Mar 19 09:18:19.201072 master-0 kubenswrapper[4035]: I0319 09:18:19.201050 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" event={"ID":"43fca1a4-4fa7-4a43-b9c4-7f50a8737643","Type":"ContainerStarted","Data":"2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0"} Mar 19 09:18:19.201926 master-0 kubenswrapper[4035]: I0319 09:18:19.201902 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" 
event={"ID":"012cdc1d-ebc8-431e-9a52-9a39de95dd0d","Type":"ContainerStarted","Data":"1504c38858cfd6dba74a1e8e13c6787eab9fb680b233330961a4b98abfa59449"} Mar 19 09:18:19.202706 master-0 kubenswrapper[4035]: I0319 09:18:19.202685 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" event={"ID":"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff","Type":"ContainerStarted","Data":"eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071"} Mar 19 09:18:19.203458 master-0 kubenswrapper[4035]: I0319 09:18:19.203425 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" event={"ID":"17e0cb4a-e776-4886-927e-ae446af7f234","Type":"ContainerStarted","Data":"42427cdb4004876179dcfbd8f19dca1e35b1708032ece70b1b2417c09bcc6b09"} Mar 19 09:18:19.204496 master-0 kubenswrapper[4035]: E0319 09:18:19.204471 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" podUID="17e0cb4a-e776-4886-927e-ae446af7f234" Mar 19 09:18:19.504490 master-0 kubenswrapper[4035]: I0319 09:18:19.504306 4035 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 09:18:19.546129 master-0 kubenswrapper[4035]: I0319 09:18:19.545673 4035 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 09:18:19.638571 master-0 kubenswrapper[4035]: I0319 09:18:19.638389 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:19.638571 master-0 kubenswrapper[4035]: I0319 09:18:19.638433 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:19.638571 master-0 kubenswrapper[4035]: I0319 09:18:19.638449 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:19.638571 master-0 kubenswrapper[4035]: I0319 09:18:19.638468 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:19.638571 master-0 kubenswrapper[4035]: I0319 09:18:19.638492 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:19.638858 master-0 kubenswrapper[4035]: E0319 09:18:19.638727 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 09:18:19.638858 master-0 kubenswrapper[4035]: E0319 09:18:19.638807 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.638786131 +0000 UTC m=+111.097401072 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found Mar 19 09:18:19.639312 master-0 kubenswrapper[4035]: E0319 09:18:19.639014 4035 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:19.639312 master-0 kubenswrapper[4035]: E0319 09:18:19.639061 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.639047339 +0000 UTC m=+111.097662280 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found Mar 19 09:18:19.639312 master-0 kubenswrapper[4035]: E0319 09:18:19.639094 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:19.639312 master-0 kubenswrapper[4035]: E0319 09:18:19.639114 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.639107751 +0000 UTC m=+111.097722692 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:19.639312 master-0 kubenswrapper[4035]: I0319 09:18:19.639278 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:19.639478 master-0 kubenswrapper[4035]: I0319 09:18:19.639347 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: 
\"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:19.639478 master-0 kubenswrapper[4035]: I0319 09:18:19.639379 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:19.639478 master-0 kubenswrapper[4035]: I0319 09:18:19.639429 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:19.639478 master-0 kubenswrapper[4035]: I0319 09:18:19.639457 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:19.639607 master-0 kubenswrapper[4035]: I0319 09:18:19.639487 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 
09:18:19.639607 master-0 kubenswrapper[4035]: I0319 09:18:19.639514 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:19.639607 master-0 kubenswrapper[4035]: I0319 09:18:19.639554 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:19.639607 master-0 kubenswrapper[4035]: I0319 09:18:19.639591 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:19.639728 master-0 kubenswrapper[4035]: I0319 09:18:19.639617 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:19.639728 master-0 kubenswrapper[4035]: I0319 09:18:19.639643 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:19.639777 master-0 kubenswrapper[4035]: E0319 09:18:19.639761 4035 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:19.639810 master-0 kubenswrapper[4035]: E0319 09:18:19.639791 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.63978245 +0000 UTC m=+111.098397391 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:19.639857 master-0 kubenswrapper[4035]: E0319 09:18:19.639831 4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 09:18:19.639893 master-0 kubenswrapper[4035]: E0319 09:18:19.639860 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:20.639851462 +0000 UTC m=+110.098466493 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found Mar 19 09:18:19.639937 master-0 kubenswrapper[4035]: E0319 09:18:19.639910 4035 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 09:18:19.639937 master-0 kubenswrapper[4035]: E0319 09:18:19.639935 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.639926534 +0000 UTC m=+111.098541475 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found Mar 19 09:18:19.639987 master-0 kubenswrapper[4035]: E0319 09:18:19.639976 4035 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 09:18:19.640016 master-0 kubenswrapper[4035]: E0319 09:18:19.639999 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.639991246 +0000 UTC m=+111.098606197 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640046 4035 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640080 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.640071958 +0000 UTC m=+111.098686899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640124 4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640147 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.64013956 +0000 UTC m=+111.098754571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640191 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640213 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.640205712 +0000 UTC m=+111.098820653 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640252 4035 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640274 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.640266353 +0000 UTC m=+111.098881384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640848 4035 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 19 09:18:19.640976 master-0 kubenswrapper[4035]: E0319 09:18:19.640950 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.640937482 +0000 UTC m=+111.099552423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:19.641276 master-0 kubenswrapper[4035]: E0319 09:18:19.641004 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 19 09:18:19.641276 master-0 kubenswrapper[4035]: E0319 09:18:19.641032 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.641021985 +0000 UTC m=+111.099636926 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:19.641276 master-0 kubenswrapper[4035]: E0319 09:18:19.641126 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:19.641276 master-0 kubenswrapper[4035]: E0319 09:18:19.641202 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:21.641181619 +0000 UTC m=+111.099796580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:19.641575 master-0 kubenswrapper[4035]: I0319 09:18:19.641477 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:18:19.655059 master-0 kubenswrapper[4035]: I0319 09:18:19.655003 4035 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:18:19.719072 master-0 kubenswrapper[4035]: I0319 09:18:19.718977 4035 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" podStartSLOduration=73.718953297 podStartE2EDuration="1m13.718953297s" podCreationTimestamp="2026-03-19 09:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:19.717221408 +0000 UTC m=+109.175836349" watchObservedRunningTime="2026-03-19 09:18:19.718953297 +0000 UTC m=+109.177568258"
Mar 19 09:18:19.789157 master-0 kubenswrapper[4035]: I0319 09:18:19.788725 4035 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:18:19.973216 master-0 kubenswrapper[4035]: I0319 09:18:19.973166 4035 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"]
Mar 19 09:18:20.215932 master-0 kubenswrapper[4035]: I0319 09:18:20.215852 4035 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" event={"ID":"70258988-8374-4aee-aaa2-be3c2e853062","Type":"ContainerStarted","Data":"6814e0600083f0996ce4c3d6eefe5646615f1a2b02ab21e27a25e1eb855f75c6"}
Mar 19 09:18:20.217411 master-0 kubenswrapper[4035]: E0319 09:18:20.217358 4035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" podUID="17e0cb4a-e776-4886-927e-ae446af7f234"
Mar 19 09:18:20.653942 master-0 kubenswrapper[4035]: I0319 09:18:20.653903 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:20.654139 master-0 kubenswrapper[4035]: E0319 09:18:20.654116 4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 19 09:18:20.654244 master-0 kubenswrapper[4035]: E0319 09:18:20.654207 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:22.654177511 +0000 UTC m=+112.112792472 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found
Mar 19 09:18:21.665174 master-0 kubenswrapper[4035]: I0319 09:18:21.665078 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: I0319 09:18:21.665237 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: I0319 09:18:21.665287 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665308 4035 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: I0319 09:18:21.665353 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665403 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.665381352 +0000 UTC m=+115.123996313 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665449 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665467 4035 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665525 4035 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665531 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.665507295 +0000 UTC m=+115.124122326 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665634 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.665611018 +0000 UTC m=+115.124225999 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:21.665659 master-0 kubenswrapper[4035]: E0319 09:18:21.665666 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.665650369 +0000 UTC m=+115.124265350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: I0319 09:18:21.665697 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: I0319 09:18:21.665745 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: I0319 09:18:21.665815 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: E0319 09:18:21.665843 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: E0319 09:18:21.665891 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.665873976 +0000 UTC m=+115.124489037 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: E0319 09:18:21.665946 4035 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: I0319 09:18:21.665956 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: E0319 09:18:21.666025 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666004009 +0000 UTC m=+115.124618990 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: I0319 09:18:21.666067 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:21.666095 master-0 kubenswrapper[4035]: E0319 09:18:21.666024 4035 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: I0319 09:18:21.666126 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666169 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666123983 +0000 UTC m=+115.124739064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: I0319 09:18:21.666204 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666248 4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666252 4035 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: I0319 09:18:21.666271 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666292 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666276527 +0000 UTC m=+115.124891508 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: I0319 09:18:21.666323 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666359 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666382 4035 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666362 4035 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 19 09:18:21.666468 master-0 kubenswrapper[4035]: E0319 09:18:21.666395 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666381 +0000 UTC m=+115.124995981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:21.666929 master-0 kubenswrapper[4035]: E0319 09:18:21.666501 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666468243 +0000 UTC m=+115.125083294 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found
Mar 19 09:18:21.666929 master-0 kubenswrapper[4035]: E0319 09:18:21.666520 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666511544 +0000 UTC m=+115.125126625 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found
Mar 19 09:18:21.666929 master-0 kubenswrapper[4035]: E0319 09:18:21.666582 4035 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 19 09:18:21.666929 master-0 kubenswrapper[4035]: E0319 09:18:21.666659 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666534915 +0000 UTC m=+115.125150006 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:21.666929 master-0 kubenswrapper[4035]: E0319 09:18:21.666681 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:25.666673118 +0000 UTC m=+115.125288179 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found
Mar 19 09:18:22.347999 master-0 kubenswrapper[4035]: I0319 09:18:22.347672 4035 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 19 09:18:22.678223 master-0 kubenswrapper[4035]: I0319 09:18:22.678117 4035 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:22.678685 master-0 kubenswrapper[4035]: E0319 09:18:22.678304 4035 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 19 09:18:22.678685 master-0 kubenswrapper[4035]: E0319 09:18:22.678406 4035 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.678382244 +0000 UTC m=+116.136997245 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found
Mar 19 09:18:24.189831 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Mar 19 09:18:24.246620 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Mar 19 09:18:24.246848 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Mar 19 09:18:24.248286 master-0 systemd[1]: kubelet.service: Consumed 8.123s CPU time.
Mar 19 09:18:24.262710 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 19 09:18:24.384610 master-0 kubenswrapper[7385]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:18:24.384610 master-0 kubenswrapper[7385]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 19 09:18:24.384610 master-0 kubenswrapper[7385]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:18:24.384610 master-0 kubenswrapper[7385]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:18:24.385904 master-0 kubenswrapper[7385]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 19 09:18:24.385904 master-0 kubenswrapper[7385]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:18:24.385904 master-0 kubenswrapper[7385]: I0319 09:18:24.384744 7385 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 09:18:24.388104 master-0 kubenswrapper[7385]: W0319 09:18:24.388077 7385 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388117 7385 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388125 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388132 7385 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388138 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388145 7385 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388150 7385 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388155 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388161 7385 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388166 7385 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388171 7385 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388177 7385 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388183 7385 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388189 7385 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 09:18:24.388176 master-0 kubenswrapper[7385]: W0319 09:18:24.388196 7385 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388201 7385 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388207 7385 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388212 7385 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388218 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388223 7385 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388228 7385 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388233 7385 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388239 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388251 7385 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388257 7385 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388263 7385 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388268 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388273 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388278 7385 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388283 7385 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388289 7385 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388297 7385 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388303 7385 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 09:18:24.388775 master-0 kubenswrapper[7385]: W0319 09:18:24.388309 7385 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388314 7385 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388319 7385 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388333 7385 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388339 7385 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388346 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388352 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388357 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388363 7385 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388368 7385 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388373 7385 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388380 7385 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388386 7385 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388393 7385 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388399 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388407 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388461 7385 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388468 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388474 7385 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:18:24.389392 master-0 kubenswrapper[7385]: W0319 09:18:24.388479 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388487 7385 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388493 7385 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388499 7385 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388504 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388510 7385 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388515 7385 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388520 7385 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388525 7385 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388531 7385 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388536 7385 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388557 7385 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388563 7385 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388568 7385 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388573 7385 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388579 7385 feature_gate.go:330] 
unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388584 7385 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388589 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388603 7385 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: W0319 09:18:24.388608 7385 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: I0319 09:18:24.388726 7385 flags.go:64] FLAG: --address="0.0.0.0" Mar 19 09:18:24.390026 master-0 kubenswrapper[7385]: I0319 09:18:24.388738 7385 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388747 7385 flags.go:64] FLAG: --anonymous-auth="true" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388755 7385 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388762 7385 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388769 7385 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388777 7385 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388784 7385 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388792 7385 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388799 7385 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388806 7385 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388812 7385 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388818 7385 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388825 7385 flags.go:64] FLAG: --cgroup-root="" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388831 7385 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388837 7385 flags.go:64] FLAG: --client-ca-file="" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388843 7385 flags.go:64] FLAG: --cloud-config="" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388849 7385 flags.go:64] FLAG: --cloud-provider="" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388855 7385 flags.go:64] FLAG: --cluster-dns="[]" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388864 7385 flags.go:64] FLAG: --cluster-domain="" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388871 7385 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388877 7385 flags.go:64] FLAG: --config-dir="" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388883 7385 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388889 7385 flags.go:64] FLAG: --container-log-max-files="5" Mar 19 09:18:24.390707 master-0 kubenswrapper[7385]: I0319 09:18:24.388896 7385 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 19 09:18:24.390707 master-0 
kubenswrapper[7385]: I0319 09:18:24.388902 7385 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388909 7385 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388915 7385 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388921 7385 flags.go:64] FLAG: --contention-profiling="false" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388927 7385 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388935 7385 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388942 7385 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388947 7385 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388955 7385 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388961 7385 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388967 7385 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388973 7385 flags.go:64] FLAG: --enable-load-reader="false" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388980 7385 flags.go:64] FLAG: --enable-server="true" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388986 7385 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.388994 7385 flags.go:64] FLAG: --event-burst="100" Mar 19 09:18:24.391464 master-0 
kubenswrapper[7385]: I0319 09:18:24.389000 7385 flags.go:64] FLAG: --event-qps="50" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389006 7385 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389012 7385 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389018 7385 flags.go:64] FLAG: --eviction-hard="" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389025 7385 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389031 7385 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389037 7385 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389043 7385 flags.go:64] FLAG: --eviction-soft="" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389049 7385 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389055 7385 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 19 09:18:24.391464 master-0 kubenswrapper[7385]: I0319 09:18:24.389061 7385 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389067 7385 flags.go:64] FLAG: --experimental-mounter-path="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389073 7385 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389079 7385 flags.go:64] FLAG: --fail-swap-on="true" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389084 7385 flags.go:64] FLAG: --feature-gates="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389091 7385 
flags.go:64] FLAG: --file-check-frequency="20s" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389097 7385 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389104 7385 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389110 7385 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389116 7385 flags.go:64] FLAG: --healthz-port="10248" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389122 7385 flags.go:64] FLAG: --help="false" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389128 7385 flags.go:64] FLAG: --hostname-override="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389134 7385 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389141 7385 flags.go:64] FLAG: --http-check-frequency="20s" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389147 7385 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389153 7385 flags.go:64] FLAG: --image-credential-provider-config="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389158 7385 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389166 7385 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389173 7385 flags.go:64] FLAG: --image-service-endpoint="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389182 7385 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389188 7385 flags.go:64] FLAG: --kube-api-burst="100" Mar 19 
09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389194 7385 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389201 7385 flags.go:64] FLAG: --kube-api-qps="50" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389207 7385 flags.go:64] FLAG: --kube-reserved="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389213 7385 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389218 7385 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 19 09:18:24.392288 master-0 kubenswrapper[7385]: I0319 09:18:24.389225 7385 flags.go:64] FLAG: --kubelet-cgroups="" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389230 7385 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389237 7385 flags.go:64] FLAG: --lock-file="" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389243 7385 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389248 7385 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389255 7385 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389263 7385 flags.go:64] FLAG: --log-json-split-stream="false" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389269 7385 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389303 7385 flags.go:64] FLAG: --log-text-split-stream="false" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389311 7385 flags.go:64] FLAG: --logging-format="text" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 
09:18:24.389317 7385 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389325 7385 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389331 7385 flags.go:64] FLAG: --manifest-url="" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389337 7385 flags.go:64] FLAG: --manifest-url-header="" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389382 7385 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389509 7385 flags.go:64] FLAG: --max-open-files="1000000" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389519 7385 flags.go:64] FLAG: --max-pods="110" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389525 7385 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389568 7385 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389576 7385 flags.go:64] FLAG: --memory-manager-policy="None" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389582 7385 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389589 7385 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389595 7385 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389601 7385 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 19 09:18:24.393155 master-0 kubenswrapper[7385]: I0319 09:18:24.389615 7385 flags.go:64] FLAG: --node-status-max-images="50" 
Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389621 7385 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389653 7385 flags.go:64] FLAG: --oom-score-adj="-999" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389662 7385 flags.go:64] FLAG: --pod-cidr="" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389770 7385 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389782 7385 flags.go:64] FLAG: --pod-manifest-path="" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389788 7385 flags.go:64] FLAG: --pod-max-pids="-1" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389794 7385 flags.go:64] FLAG: --pods-per-core="0" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389800 7385 flags.go:64] FLAG: --port="10250" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389808 7385 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389814 7385 flags.go:64] FLAG: --provider-id="" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389819 7385 flags.go:64] FLAG: --qos-reserved="" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389826 7385 flags.go:64] FLAG: --read-only-port="10255" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389832 7385 flags.go:64] FLAG: --register-node="true" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389838 7385 flags.go:64] FLAG: --register-schedulable="true" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389844 7385 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389854 7385 flags.go:64] FLAG: --registry-burst="10" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389860 7385 flags.go:64] FLAG: --registry-qps="5" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389866 7385 flags.go:64] FLAG: --reserved-cpus="" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389871 7385 flags.go:64] FLAG: --reserved-memory="" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389878 7385 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389885 7385 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389891 7385 flags.go:64] FLAG: --rotate-certificates="false" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389898 7385 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389904 7385 flags.go:64] FLAG: --runonce="false" Mar 19 09:18:24.393941 master-0 kubenswrapper[7385]: I0319 09:18:24.389910 7385 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389916 7385 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389922 7385 flags.go:64] FLAG: --seccomp-default="false" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389928 7385 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389934 7385 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389941 7385 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 19 
09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389947 7385 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389953 7385 flags.go:64] FLAG: --storage-driver-password="root" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389959 7385 flags.go:64] FLAG: --storage-driver-secure="false" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389965 7385 flags.go:64] FLAG: --storage-driver-table="stats" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389971 7385 flags.go:64] FLAG: --storage-driver-user="root" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389977 7385 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389983 7385 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389989 7385 flags.go:64] FLAG: --system-cgroups="" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.389996 7385 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390006 7385 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390012 7385 flags.go:64] FLAG: --tls-cert-file="" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390018 7385 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390026 7385 flags.go:64] FLAG: --tls-min-version="" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390032 7385 flags.go:64] FLAG: --tls-private-key-file="" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390038 7385 flags.go:64] FLAG: --topology-manager-policy="none" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: 
I0319 09:18:24.390045 7385 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390051 7385 flags.go:64] FLAG: --topology-manager-scope="container" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390057 7385 flags.go:64] FLAG: --v="2" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390066 7385 flags.go:64] FLAG: --version="false" Mar 19 09:18:24.394813 master-0 kubenswrapper[7385]: I0319 09:18:24.390073 7385 flags.go:64] FLAG: --vmodule="" Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: I0319 09:18:24.390080 7385 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: I0319 09:18:24.390087 7385 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390253 7385 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390260 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390266 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390272 7385 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390278 7385 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390284 7385 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390289 7385 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390294 7385 feature_gate.go:330] 
unrecognized feature gate: PrivateHostedZoneAWS Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390299 7385 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390305 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390310 7385 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390315 7385 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390320 7385 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390325 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390330 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390335 7385 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390341 7385 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 09:18:24.395686 master-0 kubenswrapper[7385]: W0319 09:18:24.390346 7385 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390351 7385 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390357 7385 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390362 7385 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:18:24.396486 master-0 
kubenswrapper[7385]: W0319 09:18:24.390367 7385 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390373 7385 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390378 7385 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390384 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390389 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390396 7385 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390401 7385 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390406 7385 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390414 7385 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390421 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390428 7385 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390437 7385 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390444 7385 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390450 7385 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390456 7385 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 09:18:24.396486 master-0 kubenswrapper[7385]: W0319 09:18:24.390462 7385 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390469 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390476 7385 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390481 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390488 7385 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390494 7385 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390500 7385 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390505 7385 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390511 7385 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390517 7385 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390522 7385 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390527 7385 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390533 7385 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390538 7385 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390564 7385 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390570 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390575 7385 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390580 7385 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 
09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390586 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 09:18:24.397129 master-0 kubenswrapper[7385]: W0319 09:18:24.390591 7385 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390596 7385 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390601 7385 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390610 7385 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390615 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390621 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390626 7385 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390632 7385 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390637 7385 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390644 7385 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390649 7385 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390654 7385 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390660 7385 
feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390665 7385 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390670 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390675 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:18:24.397823 master-0 kubenswrapper[7385]: W0319 09:18:24.390681 7385 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:18:24.398424 master-0 kubenswrapper[7385]: I0319 09:18:24.390690 7385 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:18:24.398424 master-0 kubenswrapper[7385]: I0319 09:18:24.398195 7385 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 19 09:18:24.398424 master-0 kubenswrapper[7385]: I0319 09:18:24.398336 7385 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 09:18:24.398534 master-0 kubenswrapper[7385]: W0319 09:18:24.398523 7385 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 09:18:24.398534 master-0 kubenswrapper[7385]: W0319 09:18:24.398533 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: 
W0319 09:18:24.398552 7385 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398558 7385 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398563 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398567 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398571 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398576 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398581 7385 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398585 7385 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:18:24.398618 master-0 kubenswrapper[7385]: W0319 09:18:24.398589 7385 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398721 7385 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398728 7385 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398732 7385 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398735 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398739 7385 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398743 7385 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398746 7385 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398750 7385 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398754 7385 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398758 7385 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398761 7385 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398764 7385 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398768 7385 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398771 7385 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398775 7385 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398778 7385 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398782 7385 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398785 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 09:18:24.398914 master-0 
kubenswrapper[7385]: W0319 09:18:24.398788 7385 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:18:24.398914 master-0 kubenswrapper[7385]: W0319 09:18:24.398793 7385 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398798 7385 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398803 7385 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398885 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398892 7385 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398898 7385 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398901 7385 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398905 7385 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398910 7385 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398915 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398919 7385 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398922 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398926 7385 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398930 7385 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398934 7385 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398937 7385 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398941 7385 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398944 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398948 7385 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:18:24.399619 master-0 kubenswrapper[7385]: W0319 09:18:24.398951 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.398955 7385 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.398959 7385 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399050 7385 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399056 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399060 7385 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399064 7385 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399068 7385 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399071 7385 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399075 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399079 7385 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399083 7385 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399086 7385 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399090 7385 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399142 7385 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399147 7385 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399151 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399155 7385 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399231 7385 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:18:24.400301 master-0 kubenswrapper[7385]: W0319 09:18:24.399236 7385 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399239 7385 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399245 7385 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399249 7385 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: I0319 09:18:24.399256 7385 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399459 7385 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399468 7385 feature_gate.go:351] 
Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399475 7385 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399479 7385 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399483 7385 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399487 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399491 7385 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399495 7385 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399498 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399502 7385 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:18:24.400994 master-0 kubenswrapper[7385]: W0319 09:18:24.399506 7385 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399510 7385 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399514 7385 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399517 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399521 7385 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399524 7385 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399527 7385 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399531 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399681 7385 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399689 7385 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399692 7385 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399697 7385 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399702 7385 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399706 7385 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399710 7385 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399714 7385 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399717 7385 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399721 7385 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399724 7385 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399728 7385 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 09:18:24.401507 master-0 kubenswrapper[7385]: W0319 09:18:24.399731 7385 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399735 7385 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399738 7385 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399743 7385 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399748 7385 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 
09:18:24.399751 7385 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399755 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399758 7385 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399762 7385 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399765 7385 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399769 7385 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399774 7385 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399778 7385 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399782 7385 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399793 7385 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399797 7385 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399801 7385 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399805 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399808 7385 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399812 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:18:24.402163 master-0 kubenswrapper[7385]: W0319 09:18:24.399815 7385 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399819 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399823 7385 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399826 7385 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399830 7385 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:18:24.402819 master-0 
kubenswrapper[7385]: W0319 09:18:24.399833 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399837 7385 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399863 7385 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399867 7385 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399871 7385 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399874 7385 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399877 7385 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399881 7385 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399885 7385 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399889 7385 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399892 7385 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399896 7385 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399900 7385 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399903 
7385 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399908 7385 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:18:24.402819 master-0 kubenswrapper[7385]: W0319 09:18:24.399912 7385 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: W0319 09:18:24.399916 7385 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.399923 7385 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.400155 7385 server.go:940] "Client rotation is on, will bootstrap in background" Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.401866 7385 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.401936 7385 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.402122 7385 server.go:997] "Starting client certificate rotation" Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.402131 7385 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.402337 7385 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-20 09:09:07 +0000 UTC, rotation deadline is 2026-03-20 02:54:23.274707726 +0000 UTC Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.402417 7385 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h35m58.872298206s for next certificate rotation Mar 19 09:18:24.403598 master-0 kubenswrapper[7385]: I0319 09:18:24.402718 7385 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 19 09:18:24.404139 master-0 kubenswrapper[7385]: I0319 09:18:24.404083 7385 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 19 09:18:24.407017 master-0 kubenswrapper[7385]: I0319 09:18:24.406989 7385 log.go:25] "Validated CRI v1 runtime API" Mar 19 09:18:24.409532 master-0 kubenswrapper[7385]: I0319 09:18:24.409501 7385 log.go:25] "Validated CRI v1 image API" Mar 19 09:18:24.410686 master-0 kubenswrapper[7385]: I0319 09:18:24.410663 7385 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 19 09:18:24.415274 master-0 kubenswrapper[7385]: I0319 09:18:24.415234 7385 fs.go:135] Filesystem UUIDs: map[433c3f11-76c1-4144-a2fc-7b9790746712:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 19 09:18:24.415661 master-0 kubenswrapper[7385]: I0319 09:18:24.415269 7385 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65/userdata/shm major:0 minor:98 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1504c38858cfd6dba74a1e8e13c6787eab9fb680b233330961a4b98abfa59449/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1504c38858cfd6dba74a1e8e13c6787eab9fb680b233330961a4b98abfa59449/userdata/shm major:0 minor:306 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6/userdata/shm major:0 minor:230 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6/userdata/shm major:0 minor:58 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b99a9e40477692f9f0735d27cce4c13db8b181a07746d8c9e160e5b7831c820/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b99a9e40477692f9f0735d27cce4c13db8b181a07746d8c9e160e5b7831c820/userdata/shm major:0 minor:236 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3c61e204454e38428fa04296fdaa0b86068d8df14b3972facff7186f87934a5b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3c61e204454e38428fa04296fdaa0b86068d8df14b3972facff7186f87934a5b/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/42427cdb4004876179dcfbd8f19dca1e35b1708032ece70b1b2417c09bcc6b09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/42427cdb4004876179dcfbd8f19dca1e35b1708032ece70b1b2417c09bcc6b09/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6814e0600083f0996ce4c3d6eefe5646615f1a2b02ab21e27a25e1eb855f75c6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6814e0600083f0996ce4c3d6eefe5646615f1a2b02ab21e27a25e1eb855f75c6/userdata/shm major:0 minor:317 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d/userdata/shm major:0 minor:292 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e924b0646dc2650e31e1b4cadf6eac6293c32b11a283f47d90fa34c50c73d4f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e924b0646dc2650e31e1b4cadf6eac6293c32b11a283f47d90fa34c50c73d4f0/userdata/shm major:0 minor:243 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/eab66404c12034ae89f04e45ade44912e55d6fddf5edcf6fc585e549c9b0d555/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eab66404c12034ae89f04e45ade44912e55d6fddf5edcf6fc585e549c9b0d555/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~projected/kube-api-access-x2hfh:{mountpoint:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~projected/kube-api-access-x2hfh major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09cc190d-5647-40a1-bfe9-5355bcb33b10/volumes/kubernetes.io~projected/kube-api-access-4w5fk:{mountpoint:/var/lib/kubelet/pods/09cc190d-5647-40a1-bfe9-5355bcb33b10/volumes/kubernetes.io~projected/kube-api-access-4w5fk major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~projected/kube-api-access-zbw6q:{mountpoint:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~projected/kube-api-access-zbw6q major:0 minor:125 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~projected/kube-api-access-7thvr:{mountpoint:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~projected/kube-api-access-7thvr major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16c631c1-277e-47d2-9377-a0bbd14673d4/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/16c631c1-277e-47d2-9377-a0bbd14673d4/volumes/kubernetes.io~projected/kube-api-access major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~projected/kube-api-access-85vjd:{mountpoint:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~projected/kube-api-access-85vjd major:0 minor:283 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~projected/kube-api-access-jrdvd:{mountpoint:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~projected/kube-api-access-jrdvd major:0 minor:234 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/etcd-client major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~projected/kube-api-access-47plx:{mountpoint:/var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~projected/kube-api-access-47plx major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3816f149-ddce-41c8-a540-fe866ee71c5e/volumes/kubernetes.io~projected/kube-api-access-7plsz:{mountpoint:/var/lib/kubelet/pods/3816f149-ddce-41c8-a540-fe866ee71c5e/volumes/kubernetes.io~projected/kube-api-access-7plsz major:0 minor:284 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~projected/kube-api-access-qvnp7:{mountpoint:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~projected/kube-api-access-qvnp7 major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fca1a4-4fa7-4a43-b9c4-7f50a8737643/volumes/kubernetes.io~projected/kube-api-access-mbktm:{mountpoint:/var/lib/kubelet/pods/43fca1a4-4fa7-4a43-b9c4-7f50a8737643/volumes/kubernetes.io~projected/kube-api-access-mbktm major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~projected/kube-api-access-8l8cg:{mountpoint:/var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~projected/kube-api-access-8l8cg major:0 minor:267 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~projected/kube-api-access major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~projected/kube-api-access-wpcnv:{mountpoint:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~projected/kube-api-access-wpcnv major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~secret/webhook-cert major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~projected/kube-api-access-8s7rj:{mountpoint:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~projected/kube-api-access-8s7rj major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~projected/kube-api-access major:0 minor:235 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~projected/kube-api-access-4xjhk:{mountpoint:/var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~projected/kube-api-access-4xjhk major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60683578-6673-4aff-b1d5-3167d534ac08/volumes/kubernetes.io~projected/kube-api-access-zcmdk:{mountpoint:/var/lib/kubelet/pods/60683578-6673-4aff-b1d5-3167d534ac08/volumes/kubernetes.io~projected/kube-api-access-zcmdk major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/672ad0aa-a0c5-4640-840d-3ffa02c55d62/volumes/kubernetes.io~projected/kube-api-access-t58zw:{mountpoint:/var/lib/kubelet/pods/672ad0aa-a0c5-4640-840d-3ffa02c55d62/volumes/kubernetes.io~projected/kube-api-access-t58zw major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~projected/kube-api-access-h925l:{mountpoint:/var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~projected/kube-api-access-h925l major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~projected/kube-api-access-tr4bl:{mountpoint:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~projected/kube-api-access-tr4bl major:0 minor:313 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~secret/serving-cert major:0 minor:316 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~projected/kube-api-access-bnxk9:{mountpoint:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~projected/kube-api-access-bnxk9 major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/kube-api-access-rbzvl:{mountpoint:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/kube-api-access-rbzvl major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~projected/kube-api-access-m8b7s:{mountpoint:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~projected/kube-api-access-m8b7s major:0 minor:298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~projected/kube-api-access-c654s:{mountpoint:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~projected/kube-api-access-c654s major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~projected/kube-api-access-smvtc:{mountpoint:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~projected/kube-api-access-smvtc major:0 minor:103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~secret/metrics-tls major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~projected/kube-api-access-dt99t:{mountpoint:/var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~projected/kube-api-access-dt99t major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~projected/kube-api-access-cjnjq:{mountpoint:/var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~projected/kube-api-access-cjnjq major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~projected/kube-api-access-4tfnn:{mountpoint:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~projected/kube-api-access-4tfnn major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~projected/kube-api-access-hccqk:{mountpoint:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~projected/kube-api-access-hccqk major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/kube-api-access-x4n26:{mountpoint:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/kube-api-access-x4n26 major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~projected/kube-api-access-2svkc:{mountpoint:/var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~projected/kube-api-access-2svkc major:0 minor:225 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~projected/kube-api-access-tll8k:{mountpoint:/var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~projected/kube-api-access-tll8k major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~projected/kube-api-access-r8bm4:{mountpoint:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~projected/kube-api-access-r8bm4 major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/2904620dda59a0bcdd59d19b18e02e96e9bfe0c84f2a88f01e0606a73cab5341/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/ca39386eee2590b86c0a99acb21d0282c1275e254621fe3038c285dd41f41ee0/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/3e16e6aa8005f57eef642addd7e897f5563a62bba88b8f51cfe1fa2bb49faad4/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/ce0342f08be1ad677881da1fb3452c8c36e4d67ddab2383634906b443ecb8d62/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/ea95dfb0f637454790e996bca5f05544200a8893caf3bebe22d9b0aaa8ab9cd0/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/66fdd1d95ba0cdac8c3a4b718f78204ca09b28a17654a9202a475ff1ad6ce07d/merged major:0 minor:132 fsType:overlay blockSize:0} 
overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/4c55a61835eaa8ea2f6ba609e47d5fde6f7e1042ab1ee78896c0396d5cc9c28e/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/662b8e86db31eaf0280c1f823ae78c9798f8b0aa9aefa8f7fa85a2a523919c07/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/a27374ce4ecd442d427d7793911d2be44cd2b6f573055540f6cd5347a3d5d1d0/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/9dc8b2540527dc86a59db8a8d04290eff7d4d6a10569534c7ed642d46ee5f99a/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/dfe7b27489dd4b692527c6b2ae01df01ed51ee20b4c8eb27274ca32c68de71aa/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/9494e9dccc5b85677cd6010f8166c04283b373c2f0b15415c3605eeb1ff3e1d9/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/adef079a9c929fc5112d92b3fe3954673fe9f430dcfbd66d913f7b60bae632ee/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/e8676547543c087854b294f0b5a5011e84617557e66bf4a7778f3a8ddca85939/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/8ce953a703adb3190036eb90c0274dbf8b18e39e39487de5d5182d916a189b17/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/9f4ef13e2a16bc215dcf1db9b6805b6a8fa43ab3000de0680df181dd64d3434d/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/74e4df1edaea55b7254f6da06dc5aab31a6aa44d63aeb4954c94ba7e4b6c23e5/merged major:0 minor:179 fsType:overlay blockSize:0} 
overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/dbcbb63f73e77f44ffdcd28f8d1c7f4016cf49787c7484ae5ee2615deb92f109/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/6430b9f83e64d4f4df986656d12d3b189bb6439165ae5663d6c8645c0ca71aac/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/83850342d3de4dee842d0530d52a953a1d0ca66739cc3837f15d3fe687347a70/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/d1663bc1f9a653636bcc8ddfa58552fc7fbdd6fc29dddb07391195367829528c/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/8d8bcf09de56103234b5b43dbde5d875962fa65b1af5862d21ec60fff264550c/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-251:{mountpoint:/var/lib/containers/storage/overlay/91d1d35772902720a039771c2b348d76f81adeacf4e9363f00e52d88579745b3/merged major:0 minor:251 fsType:overlay blockSize:0} overlay_0-253:{mountpoint:/var/lib/containers/storage/overlay/23825ba8eb4a338448de648afe2df972b37e8a775d29c3e8b0e68cd6ea087d12/merged major:0 minor:253 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/8a917b909955e4c8a038cf09eaf92798d2be9aca55c80d573ce0f1d7a064c02f/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-266:{mountpoint:/var/lib/containers/storage/overlay/d9607c509c265ac6aadf2928e0954775b15d6a0d1b7e58f75452ec0573d57041/merged major:0 minor:266 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/c5a19326b9b6995f871cdaac34c49cbff0f75a5b169cf0c1099a1e4ac2d44436/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/ee08dbac3cdbaa14412c1920efa7e12316a52bff5711479e93824f4c85099d1a/merged major:0 minor:272 fsType:overlay blockSize:0} 
overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/c55a17798997026925a872ac772cd65dc48ad4757ace7594bd50d629497713b2/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/0c296d31575c30a6fd89d3b4201b986e85ed3cc3e030fdc9208455397feb2c1d/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/c76a3d692606db9bfd36450cfa72529c219dfa7ff18fc95754683f3daec3854a/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/e128a3d25d1c0d399dd979135719edbc61f916cedccaa663403b4da33a37031c/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/0b6c9caad65671be7f4fea40ffc6d71e159823658021ed1608695cae77b7c685/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/5cd4f9d50bc5b8c686ad6a532716609cfc2d99b9d115a3c6b863ae1115c71420/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/b0a97dcd648b7ee621dbf960ce554aaaf6819be74a70ef2078ed795ef815c737/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/96eac3dc8c82fef6c5ccad5bf64c7018253c61217eb3dd430af24d33fe888e8e/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/0c51e2a0d087074f6fc972e9cb72d7a93cb1edb58db4e5db7167ab87ed14e73b/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/88cf48e33945613db7877ef2abc1f8225822ad30e0916cc5d4d55e94973ba95f/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/a9b973500e27ccb2428d3efabde2ec9c3c77369043a91673b0e4c1345d3575b3/merged major:0 minor:44 fsType:overlay blockSize:0} 
overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/fd6398ae8c6aa0e179209e34612571593c1c52674b904ec7ed7c7675c18a3f63/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/3eb2aa57aee755d94a65eeabafa884b9f257693b6d986895170141787ef2b9c0/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/e419953e648c6d046cc05ef98936fbb843cfc02da39b32d8324478354c718cc4/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/d5198bfa70979871de4f8bda4fc9e8bbd72c03d439393121eb08d097c0e08f33/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/72662d4ebad7856cc7e02b3e21ee1048d5cddf5925f964b8ac57a2849c8ff35a/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/3a407e6cb1158c3711f9a91b375c87a701851126ec2d521d25e93c045446c391/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/81886ef0b24f13a9f2254201c166e6bffefa3239e3a0ec6e7c0cf001ad6fa7b5/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/a298d318af20480b458acd93af91559e327d82b83cdec2db5b6e85ac3c55aece/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/6ef75f279a225d22224cba53f7a64e498a4bf51e929ce8ec83213367f94cb1ab/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/6a2c3e8dabf9b11a02e7792035acd9c0e4a6e89c7213192578fc92e5b5072fdd/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/bed15894a7453bcbcb8f7f6fd18f70677fbfaf5340f1f3a3cfeb90981ca4e457/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 19 09:18:24.449306 master-0 
kubenswrapper[7385]: I0319 09:18:24.448522 7385 manager.go:217] Machine: {Timestamp:2026-03-19 09:18:24.447465306 +0000 UTC m=+0.121895047 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:3e104eb08e5948b08517e4448d4a842b SystemUUID:3e104eb0-8e59-48b0-8517-e4448d4a842b BootID:5d651922-4f48-42db-81f8-e0fd55710ee7 Filesystems:[{Device:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~projected/kube-api-access-85vjd DeviceMajor:0 DeviceMinor:283 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~projected/kube-api-access-m8b7s DeviceMajor:0 DeviceMinor:298 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~projected/kube-api-access-4xjhk DeviceMajor:0 DeviceMinor:275 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3c61e204454e38428fa04296fdaa0b86068d8df14b3972facff7186f87934a5b/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e924b0646dc2650e31e1b4cadf6eac6293c32b11a283f47d90fa34c50c73d4f0/userdata/shm DeviceMajor:0 DeviceMinor:243 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eab66404c12034ae89f04e45ade44912e55d6fddf5edcf6fc585e549c9b0d555/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16c631c1-277e-47d2-9377-a0bbd14673d4/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:94 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~projected/kube-api-access-8l8cg DeviceMajor:0 DeviceMinor:267 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~projected/kube-api-access-tll8k DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~projected/kube-api-access-jrdvd DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~projected/kube-api-access-c654s DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~projected/kube-api-access-8s7rj DeviceMajor:0 DeviceMinor:271 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~projected/kube-api-access-tr4bl DeviceMajor:0 DeviceMinor:313 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:141 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6814e0600083f0996ce4c3d6eefe5646615f1a2b02ab21e27a25e1eb855f75c6/userdata/shm DeviceMajor:0 DeviceMinor:317 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65/userdata/shm DeviceMajor:0 DeviceMinor:98 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-266 DeviceMajor:0 DeviceMinor:266 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/kube-api-access-rbzvl DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~projected/kube-api-access-r8bm4 DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 
DeviceMinor:262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d/userdata/shm DeviceMajor:0 DeviceMinor:292 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~projected/kube-api-access-47plx DeviceMajor:0 DeviceMinor:278 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~projected/kube-api-access-hccqk DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:260 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b99a9e40477692f9f0735d27cce4c13db8b181a07746d8c9e160e5b7831c820/userdata/shm DeviceMajor:0 DeviceMinor:236 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3816f149-ddce-41c8-a540-fe866ee71c5e/volumes/kubernetes.io~projected/kube-api-access-7plsz DeviceMajor:0 DeviceMinor:284 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~projected/kube-api-access-smvtc DeviceMajor:0 DeviceMinor:103 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~projected/kube-api-access-cjnjq DeviceMajor:0 DeviceMinor:261 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/42427cdb4004876179dcfbd8f19dca1e35b1708032ece70b1b2417c09bcc6b09/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/60683578-6673-4aff-b1d5-3167d534ac08/volumes/kubernetes.io~projected/kube-api-access-zcmdk DeviceMajor:0 
DeviceMinor:115 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1504c38858cfd6dba74a1e8e13c6787eab9fb680b233330961a4b98abfa59449/userdata/shm DeviceMajor:0 DeviceMinor:306 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:316 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/kube-api-access-x4n26 DeviceMajor:0 DeviceMinor:257 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~projected/kube-api-access-zbw6q DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-251 DeviceMajor:0 DeviceMinor:251 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~projected/kube-api-access-qvnp7 DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~projected/kube-api-access-h925l DeviceMajor:0 DeviceMinor:265 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~projected/kube-api-access-2svkc DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/43fca1a4-4fa7-4a43-b9c4-7f50a8737643/volumes/kubernetes.io~projected/kube-api-access-mbktm DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~projected/kube-api-access-bnxk9 DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6/userdata/shm DeviceMajor:0 DeviceMinor:230 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~projected/kube-api-access-x2hfh DeviceMajor:0 DeviceMinor:274 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~projected/kube-api-access-wpcnv DeviceMajor:0 DeviceMinor:140 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-253 DeviceMajor:0 DeviceMinor:253 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/09cc190d-5647-40a1-bfe9-5355bcb33b10/volumes/kubernetes.io~projected/kube-api-access-4w5fk DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~projected/kube-api-access-7thvr DeviceMajor:0 DeviceMinor:264 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~projected/kube-api-access-4tfnn DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/672ad0aa-a0c5-4640-840d-3ffa02c55d62/volumes/kubernetes.io~projected/kube-api-access-t58zw DeviceMajor:0 DeviceMinor:295 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~projected/kube-api-access-dt99t DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:1504c38858cfd6d MacAddress:d6:49:8c:48:c6:19 Speed:10000 Mtu:8900} {Name:227a0b9baae07c0 MacAddress:b2:69:c4:57:55:0a Speed:10000 Mtu:8900} {Name:2305598daf56c5c MacAddress:42:46:fa:f8:ac:79 Speed:10000 Mtu:8900} {Name:2b99a9e40477692 MacAddress:b2:d3:b6:1f:cb:f8 Speed:10000 Mtu:8900} {Name:3c61e204454e384 MacAddress:be:84:1f:9e:89:5a Speed:10000 Mtu:8900} {Name:42427cdb4004876 MacAddress:ea:25:3e:e8:1f:7f Speed:10000 Mtu:8900} {Name:6814e0600083f09 MacAddress:f2:44:a7:d8:85:ee Speed:10000 Mtu:8900} {Name:703bc73d8896572 MacAddress:a2:1b:ab:f3:26:23 Speed:10000 Mtu:8900} {Name:8cb029a2424e510 MacAddress:8a:ed:f8:a8:75:3f Speed:10000 Mtu:8900} {Name:8ceab37591fbebe MacAddress:52:d2:1b:85:a8:4e Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:7a:f2:ce:15:99:40 Speed:0 Mtu:8900} {Name:e924b0646dc2650 MacAddress:c2:aa:70:67:ee:e9 Speed:10000 Mtu:8900} {Name:eab66404c12034a MacAddress:26:02:3d:99:49:74 Speed:10000 
Mtu:8900} {Name:eb3a8fcff4f5b0d MacAddress:42:6d:98:30:87:11 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:33:06:4c Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:34:d3:c4 Speed:-1 Mtu:9000} {Name:f0094116ac72664 MacAddress:0a:c5:28:af:d8:de Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:7e:e2:1c:fe:3d:73 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 
Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 19 09:18:24.449306 master-0 kubenswrapper[7385]: I0319 09:18:24.449250 7385 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 19 09:18:24.449669 master-0 kubenswrapper[7385]: I0319 09:18:24.449486 7385 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 19 09:18:24.449774 master-0 kubenswrapper[7385]: I0319 09:18:24.449744 7385 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 09:18:24.449945 master-0 kubenswrapper[7385]: I0319 09:18:24.449899 7385 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 09:18:24.450178 master-0 kubenswrapper[7385]: I0319 09:18:24.449941 7385 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 19 09:18:24.450226 master-0 kubenswrapper[7385]: I0319 09:18:24.450203 7385 topology_manager.go:138] "Creating topology manager with none policy"
Mar 19 09:18:24.450226 master-0 kubenswrapper[7385]: I0319 09:18:24.450218 7385 container_manager_linux.go:303] "Creating device plugin manager"
Mar 19 09:18:24.450281 master-0 kubenswrapper[7385]: I0319 09:18:24.450229 7385 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 19 09:18:24.450281 master-0 kubenswrapper[7385]: I0319 09:18:24.450259 7385 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 19 09:18:24.450439 master-0 kubenswrapper[7385]: I0319 09:18:24.450400 7385 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 09:18:24.450560 master-0 kubenswrapper[7385]: I0319 09:18:24.450527 7385 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 19 09:18:24.450647 master-0 kubenswrapper[7385]: I0319 09:18:24.450630 7385 kubelet.go:418] "Attempting to sync node with API server"
Mar 19 09:18:24.450680 master-0 kubenswrapper[7385]: I0319 09:18:24.450651 7385 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 19 09:18:24.450680 master-0 kubenswrapper[7385]: I0319 09:18:24.450668 7385 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 19 09:18:24.450738 master-0 kubenswrapper[7385]: I0319 09:18:24.450682 7385 kubelet.go:324] "Adding apiserver pod source"
Mar 19 09:18:24.450738 master-0 kubenswrapper[7385]: I0319 09:18:24.450719 7385 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 19 09:18:24.476708 master-0 kubenswrapper[7385]: I0319 09:18:24.476618 7385 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 19 09:18:24.476889 master-0 kubenswrapper[7385]: I0319 09:18:24.476874 7385 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 19 09:18:24.477253 master-0 kubenswrapper[7385]: I0319 09:18:24.477226 7385 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 19 09:18:24.477388 master-0 kubenswrapper[7385]: I0319 09:18:24.477369 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477390 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477399 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477406 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477412 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477418 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477424 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477430 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477442 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 19 09:18:24.477442 master-0 kubenswrapper[7385]: I0319 09:18:24.477448 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 19 09:18:24.477788 master-0 kubenswrapper[7385]: I0319 09:18:24.477457 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 19 09:18:24.477788 master-0 kubenswrapper[7385]: I0319 09:18:24.477474 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 19 09:18:24.477788 master-0 kubenswrapper[7385]: I0319 09:18:24.477613 7385 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 19 09:18:24.483793 master-0 kubenswrapper[7385]: I0319 09:18:24.478148 7385 server.go:1280] "Started kubelet"
Mar 19 09:18:24.483793 master-0 kubenswrapper[7385]: I0319 09:18:24.478228 7385 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 19 09:18:24.483793 master-0 kubenswrapper[7385]: I0319 09:18:24.478287 7385 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 19 09:18:24.483793 master-0 kubenswrapper[7385]: I0319 09:18:24.478754 7385 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 19 09:18:24.483793 master-0 kubenswrapper[7385]: I0319 09:18:24.479245 7385 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 19 09:18:24.483793 master-0 kubenswrapper[7385]: I0319 09:18:24.481911 7385 server.go:449] "Adding debug handlers to kubelet server"
Mar 19 09:18:24.479434 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 19 09:18:24.485895 master-0 kubenswrapper[7385]: I0319 09:18:24.485843 7385 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 19 09:18:24.485895 master-0 kubenswrapper[7385]: I0319 09:18:24.485897 7385 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 19 09:18:24.486085 master-0 kubenswrapper[7385]: I0319 09:18:24.486021 7385 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 09:09:07 +0000 UTC, rotation deadline is 2026-03-20 03:21:33.304353622 +0000 UTC
Mar 19 09:18:24.486085 master-0 kubenswrapper[7385]: I0319 09:18:24.486073 7385 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h3m8.818283785s for next certificate rotation
Mar 19 09:18:24.486162 master-0 kubenswrapper[7385]: I0319 09:18:24.486146 7385 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 19 09:18:24.486162 master-0 kubenswrapper[7385]: I0319 09:18:24.486160 7385 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 19 09:18:24.486249 master-0 kubenswrapper[7385]: E0319 09:18:24.486202 7385 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 19 09:18:24.486249 master-0 kubenswrapper[7385]: I0319 09:18:24.486211 7385 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 19 09:18:24.489325 master-0 kubenswrapper[7385]: I0319 09:18:24.489277 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489323 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676f4062-ea34-48d0-80d7-3cd3d9da341e" volumeName="kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489374 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6cd2eac-6412-4f38-8272-743c67b218a3" volumeName="kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489386 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" volumeName="kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489396 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489406 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489418 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" volumeName="kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489428 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489440 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489450 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53bff8e4-bf60-4386-8905-49d43fd6c420" volumeName="kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489461 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70258988-8374-4aee-aaa2-be3c2e853062" volumeName="kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489472 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c" volumeName="kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc" seLinuxMountContext=""
Mar 19 09:18:24.489471 master-0 kubenswrapper[7385]: I0319 09:18:24.489481 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489494 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cc190d-5647-40a1-bfe9-5355bcb33b10" volumeName="kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489504 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" volumeName="kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489512 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489520 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489528 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fca1a4-4fa7-4a43-b9c4-7f50a8737643" volumeName="kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489554 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d66c30b6-67ad-4864-8b51-0424d462ac98" volumeName="kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489563 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489572 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489580 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" volumeName="kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489589 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c222998f-6211-4466-8ad7-5d9fcfb10789" volumeName="kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489598 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e25a16f3-dfe0-49c5-a31d-e310d369f406" volumeName="kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489606 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" volumeName="kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489615 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff5aeea-f859-4e38-bf1c-9e730025c212" volumeName="kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489626 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" volumeName="kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489640 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489653 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58fbf09a-3a26-45ab-8496-11d05c27e9cf" volumeName="kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489669 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="672ad0aa-a0c5-4640-840d-3ffa02c55d62" volumeName="kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489681 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489692 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" volumeName="kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489714 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6cd2eac-6412-4f38-8272-743c67b218a3" volumeName="kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489723 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" volumeName="kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489731 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" volumeName="kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489740 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c631c1-277e-47d2-9377-a0bbd14673d4" volumeName="kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489751 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489761 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489773 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489942 7385 factory.go:55] Registering systemd factory
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.489959 7385 factory.go:221] Registration of the systemd container factory successfully
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490126 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490147 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490157 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" volumeName="kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490165 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53bff8e4-bf60-4386-8905-49d43fd6c420" volumeName="kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490187 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490197 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676f4062-ea34-48d0-80d7-3cd3d9da341e" volumeName="kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490209 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c222998f-6211-4466-8ad7-5d9fcfb10789" volumeName="kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490220 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70258988-8374-4aee-aaa2-be3c2e853062" volumeName="kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490234 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a67ae8dc-240d-4708-9139-1d49c601e552" volumeName="kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490243 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a67ae8dc-240d-4708-9139-1d49c601e552" volumeName="kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490254 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" volumeName="kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490264 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cc190d-5647-40a1-bfe9-5355bcb33b10" volumeName="kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490309 7385 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490349 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17e0cb4a-e776-4886-927e-ae446af7f234" volumeName="kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490373 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490390 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490403 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d66c30b6-67ad-4864-8b51-0424d462ac98" volumeName="kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490420 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a67ae8dc-240d-4708-9139-1d49c601e552" volumeName="kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490432 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" volumeName="kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490442 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d66c30b6-67ad-4864-8b51-0424d462ac98" volumeName="kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490452 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" volumeName="kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490461 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490470 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17e0cb4a-e776-4886-927e-ae446af7f234" volumeName="kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490478 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45523224-f530-4354-90de-7fd65a1a3911" volumeName="kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg" seLinuxMountContext=""
Mar 19 09:18:24.490602 master-0 kubenswrapper[7385]: I0319 09:18:24.490475 7385 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.490924 7385 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.490486 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53bff8e4-bf60-4386-8905-49d43fd6c420" volumeName="kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491007 7385 factory.go:153] Registering CRI-O factory
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491026 7385 factory.go:221] Registration of the crio container factory successfully
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.490997 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c222998f-6211-4466-8ad7-5d9fcfb10789" volumeName="kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491077 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491103 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491123 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491136 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70258988-8374-4aee-aaa2-be3c2e853062" volumeName="kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491152 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491162 7385 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491189 7385 factory.go:103] Registering Raw factory
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491167 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491208 7385 manager.go:1196] Started watching for new ooms in manager
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491219 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cc190d-5647-40a1-bfe9-5355bcb33b10" volumeName="kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491236 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="525b41b5-82d8-4d47-8350-79644a2c9360" volumeName="kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491246 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" volumeName="kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491255 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c" volumeName="kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491264 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491272 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58fbf09a-3a26-45ab-8496-11d05c27e9cf" volumeName="kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491280 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491289 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" volumeName="kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491298 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491307 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" volumeName="kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491333 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491342 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="211d123b-829c-49dd-b119-e172cab607cf" volumeName="kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491352 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3816f149-ddce-41c8-a540-fe866ee71c5e" volumeName="kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491360 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" volumeName="kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491369 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491379 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="525b41b5-82d8-4d47-8350-79644a2c9360" volumeName="kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491388 7385 reconstruct.go:130] "Volume is marked as uncertain and
added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491398 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491406 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c631c1-277e-47d2-9377-a0bbd14673d4" volumeName="kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491413 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17e0cb4a-e776-4886-927e-ae446af7f234" volumeName="kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491422 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" volumeName="kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491432 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491440 7385 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491448 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491458 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="672ad0aa-a0c5-4640-840d-3ffa02c55d62" volumeName="kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491468 7385 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6cd2eac-6412-4f38-8272-743c67b218a3" volumeName="kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26" seLinuxMountContext="" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491477 7385 reconstruct.go:97] "Volume reconstruction finished" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491483 7385 reconciler.go:26] "Reconciler: start to sync state" Mar 19 09:18:24.493505 master-0 kubenswrapper[7385]: I0319 09:18:24.491649 7385 manager.go:319] Starting recovery of all containers Mar 19 09:18:24.496352 master-0 kubenswrapper[7385]: I0319 09:18:24.496322 7385 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 19 09:18:24.526795 master-0 kubenswrapper[7385]: I0319 09:18:24.526734 7385 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 19 09:18:24.528618 master-0 kubenswrapper[7385]: I0319 09:18:24.528585 7385 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 09:18:24.528680 master-0 kubenswrapper[7385]: I0319 09:18:24.528627 7385 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 19 09:18:24.528680 master-0 kubenswrapper[7385]: I0319 09:18:24.528649 7385 kubelet.go:2335] "Starting kubelet main sync loop" Mar 19 09:18:24.528775 master-0 kubenswrapper[7385]: E0319 09:18:24.528694 7385 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 09:18:24.530510 master-0 kubenswrapper[7385]: I0319 09:18:24.530249 7385 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 09:18:24.536665 master-0 kubenswrapper[7385]: I0319 09:18:24.535340 7385 generic.go:334] "Generic (PLEG): container finished" podID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerID="4f7ae82c42fcdc2525bbc875f58985f627c3385f9956bdf7d697087dac6e3a2f" exitCode=0 Mar 19 09:18:24.539817 master-0 kubenswrapper[7385]: I0319 09:18:24.539761 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 19 09:18:24.540250 master-0 kubenswrapper[7385]: I0319 09:18:24.540211 7385 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644" exitCode=1 Mar 19 09:18:24.540322 master-0 kubenswrapper[7385]: I0319 09:18:24.540250 7385 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="b213f6d8da0d4384e45f89c17fb5962fd352a3cea0a7f3f8261c476ba746dbca" exitCode=0 Mar 19 09:18:24.556155 master-0 kubenswrapper[7385]: I0319 
09:18:24.556110 7385 generic.go:334] "Generic (PLEG): container finished" podID="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" containerID="cfed02ef0a3bee4084b5a5748407cbaeafff5b6fc759f0c7f9bdc76ec5af9ce1" exitCode=0 Mar 19 09:18:24.557375 master-0 kubenswrapper[7385]: I0319 09:18:24.557347 7385 generic.go:334] "Generic (PLEG): container finished" podID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerID="e2912f5a07027e593c03c831722de1c74b974cbf7fe0986009830ada22289435" exitCode=0 Mar 19 09:18:24.567796 master-0 kubenswrapper[7385]: I0319 09:18:24.567755 7385 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="00add47a2cdec59c3ac383946429a4dc013519a6933bbb0d7ebdd58eb0eb7186" exitCode=0 Mar 19 09:18:24.576762 master-0 kubenswrapper[7385]: I0319 09:18:24.576730 7385 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="c8bff62b4e05425e80c7e14b2ad4d089fe60c7b7e27feb3cfc2b1fde8c062902" exitCode=0 Mar 19 09:18:24.576762 master-0 kubenswrapper[7385]: I0319 09:18:24.576756 7385 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="58f2d035e725f793e501aa00d5cd6dec60187d755b95ed0332885f977a2d1232" exitCode=0 Mar 19 09:18:24.576762 master-0 kubenswrapper[7385]: I0319 09:18:24.576767 7385 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="86d7bf6f8a152beed53ca9a59153f0d5628c8aeeca38c4e7133940d1c9f346af" exitCode=0 Mar 19 09:18:24.576902 master-0 kubenswrapper[7385]: I0319 09:18:24.576775 7385 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="20b5e36de175a38e8938a8e709cd8fa1a5177137ac9ceff4b103028234492d38" exitCode=0 Mar 19 09:18:24.576902 master-0 kubenswrapper[7385]: I0319 09:18:24.576783 7385 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" 
containerID="575ffdeb036bb96884333ecfd381cd08c10d745628010252b611aaa18d03bb88" exitCode=0 Mar 19 09:18:24.576902 master-0 kubenswrapper[7385]: I0319 09:18:24.576790 7385 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="be5668fe1c571dde1e396c091e4c7ec37d88531f9ac3613886b71274efe031c6" exitCode=0 Mar 19 09:18:24.604479 master-0 kubenswrapper[7385]: I0319 09:18:24.604444 7385 manager.go:324] Recovery completed Mar 19 09:18:24.628860 master-0 kubenswrapper[7385]: E0319 09:18:24.628796 7385 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 09:18:24.633319 master-0 kubenswrapper[7385]: I0319 09:18:24.633289 7385 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 19 09:18:24.633319 master-0 kubenswrapper[7385]: I0319 09:18:24.633310 7385 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 19 09:18:24.633406 master-0 kubenswrapper[7385]: I0319 09:18:24.633327 7385 state_mem.go:36] "Initialized new in-memory state store" Mar 19 09:18:24.633506 master-0 kubenswrapper[7385]: I0319 09:18:24.633474 7385 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 09:18:24.633561 master-0 kubenswrapper[7385]: I0319 09:18:24.633499 7385 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 09:18:24.633561 master-0 kubenswrapper[7385]: I0319 09:18:24.633519 7385 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 19 09:18:24.633561 master-0 kubenswrapper[7385]: I0319 09:18:24.633525 7385 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 19 09:18:24.633561 master-0 kubenswrapper[7385]: I0319 09:18:24.633532 7385 policy_none.go:49] "None policy: Start" Mar 19 09:18:24.634651 master-0 kubenswrapper[7385]: I0319 09:18:24.634622 7385 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 09:18:24.634651 master-0 kubenswrapper[7385]: I0319 09:18:24.634649 
7385 state_mem.go:35] "Initializing new in-memory state store" Mar 19 09:18:24.634823 master-0 kubenswrapper[7385]: I0319 09:18:24.634798 7385 state_mem.go:75] "Updated machine memory state" Mar 19 09:18:24.634823 master-0 kubenswrapper[7385]: I0319 09:18:24.634814 7385 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 19 09:18:24.642473 master-0 kubenswrapper[7385]: I0319 09:18:24.642454 7385 manager.go:334] "Starting Device Plugin manager" Mar 19 09:18:24.642622 master-0 kubenswrapper[7385]: I0319 09:18:24.642598 7385 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 09:18:24.642622 master-0 kubenswrapper[7385]: I0319 09:18:24.642614 7385 server.go:79] "Starting device plugin registration server" Mar 19 09:18:24.642903 master-0 kubenswrapper[7385]: I0319 09:18:24.642888 7385 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 09:18:24.642939 master-0 kubenswrapper[7385]: I0319 09:18:24.642903 7385 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 09:18:24.643030 master-0 kubenswrapper[7385]: I0319 09:18:24.643013 7385 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 19 09:18:24.643094 master-0 kubenswrapper[7385]: I0319 09:18:24.643078 7385 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 19 09:18:24.643094 master-0 kubenswrapper[7385]: I0319 09:18:24.643089 7385 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 09:18:24.743736 master-0 kubenswrapper[7385]: I0319 09:18:24.743623 7385 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:18:24.745392 master-0 kubenswrapper[7385]: I0319 09:18:24.745356 7385 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 
09:18:24.745445 master-0 kubenswrapper[7385]: I0319 09:18:24.745406 7385 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:18:24.745445 master-0 kubenswrapper[7385]: I0319 09:18:24.745418 7385 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:18:24.745504 master-0 kubenswrapper[7385]: I0319 09:18:24.745472 7385 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:18:24.756931 master-0 kubenswrapper[7385]: I0319 09:18:24.756897 7385 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 19 09:18:24.757033 master-0 kubenswrapper[7385]: I0319 09:18:24.757014 7385 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 19 09:18:24.830080 master-0 kubenswrapper[7385]: I0319 09:18:24.829975 7385 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 09:18:24.830466 master-0 kubenswrapper[7385]: I0319 09:18:24.830420 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650" Mar 19 09:18:24.830563 master-0 kubenswrapper[7385]: I0319 09:18:24.830452 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"9df04d5fbcf74c680b5a31ee14b15b95259c81da87f4ed60f22768d81cdac068"} Mar 19 09:18:24.830606 master-0 kubenswrapper[7385]: I0319 09:18:24.830535 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644"} Mar 19 09:18:24.830606 master-0 kubenswrapper[7385]: I0319 09:18:24.830590 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"b213f6d8da0d4384e45f89c17fb5962fd352a3cea0a7f3f8261c476ba746dbca"} Mar 19 09:18:24.830696 master-0 kubenswrapper[7385]: I0319 09:18:24.830606 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8"} Mar 19 09:18:24.830696 master-0 kubenswrapper[7385]: I0319 09:18:24.830625 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"0fa64701f5e06185b54d04000e8eff35b5351d75655dd3a6eb6ffaa3f06a93bd"} Mar 19 09:18:24.830696 master-0 kubenswrapper[7385]: I0319 09:18:24.830640 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6"} Mar 19 09:18:24.830696 master-0 kubenswrapper[7385]: I0319 09:18:24.830656 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"45039086b1bdf7c8b135828088ebf13ff393c5333b5272f1cf3328f195ddea5b"} Mar 19 09:18:24.830696 master-0 kubenswrapper[7385]: I0319 09:18:24.830671 7385 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"e936e2d314dab9154842440cf41e00874f26fcc073cf860d24367374f28b489d"} Mar 19 09:18:24.830696 master-0 kubenswrapper[7385]: I0319 09:18:24.830686 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2"} Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830756 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9fbbe858fe11080717c6b2043df87705b18427f15663b4d039cae1dd0e63eb" Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830778 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee54fa7ada4c624d77b0e2f3dcbdb8c8d02973745fe65c2d05740e42c92ec9e" Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830791 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"4ff4b935126cc5d750c1d850d7bd8bc2f70fd6fa92c703e7c39a069db8572af3"} Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830808 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"0ea74be9ce6a8db82cc76cb8b1abbace62eee2a97494f9a8b0c0af4311285f49"} Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830828 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"00add47a2cdec59c3ac383946429a4dc013519a6933bbb0d7ebdd58eb0eb7186"} Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830844 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611"} Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830889 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1"} Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830905 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc"} Mar 19 09:18:24.830935 master-0 kubenswrapper[7385]: I0319 09:18:24.830920 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9495b204f939cfe309fc2de424fddb422399f1a686d2e065d4ed70b1caf63a00"} Mar 19 09:18:24.843527 master-0 kubenswrapper[7385]: W0319 09:18:24.843499 7385 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set 
securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 19 09:18:24.843651 master-0 kubenswrapper[7385]: E0319 09:18:24.843537 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:18:24.843651 master-0 kubenswrapper[7385]: E0319 09:18:24.843582 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:18:24.843907 master-0 kubenswrapper[7385]: E0319 09:18:24.843884 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:24.849643 master-0 kubenswrapper[7385]: E0319 09:18:24.849616 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:18:24.850258 master-0 kubenswrapper[7385]: E0319 09:18:24.850238 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898646 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: 
\"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898684 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898710 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898760 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898831 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898885 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898914 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898929 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898964 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.898990 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.899012 7385 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:18:24.899011 master-0 kubenswrapper[7385]: I0319 09:18:24.899028 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:18:24.899625 master-0 kubenswrapper[7385]: I0319 09:18:24.899055 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:18:24.899625 master-0 kubenswrapper[7385]: I0319 09:18:24.899070 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:18:24.899625 master-0 kubenswrapper[7385]: I0319 09:18:24.899085 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:24.899625 master-0 kubenswrapper[7385]: I0319 09:18:24.899123 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:24.899625 master-0 kubenswrapper[7385]: I0319 09:18:24.899156 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 09:18:24.999462 master-0 kubenswrapper[7385]: I0319 09:18:24.999327 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:24.999659 master-0 kubenswrapper[7385]: I0319 09:18:24.999455 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:24.999659 master-0 kubenswrapper[7385]: I0319 09:18:24.999581 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:24.999659 master-0 kubenswrapper[7385]: I0319 09:18:24.999617 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 09:18:24.999659 master-0 kubenswrapper[7385]: I0319 09:18:24.999638 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999666 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999688 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999725 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999752 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999774 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999795 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999817 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 09:18:24.999851 master-0 kubenswrapper[7385]: I0319 09:18:24.999839 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:24.999862 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:24.999938 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:24.999969 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:24.999991 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:25.000013 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:25.000049 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:25.000084 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:25.000118 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 09:18:25.000146 master-0 kubenswrapper[7385]: I0319 09:18:25.000151 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000188 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000220 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000250 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000284 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000319 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000380 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000416 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000447 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000496 master-0 kubenswrapper[7385]: I0319 09:18:25.000483 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000831 master-0 kubenswrapper[7385]: I0319 09:18:25.000525 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:25.000831 master-0 kubenswrapper[7385]: I0319 09:18:25.000580 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 19 09:18:25.000831 master-0 kubenswrapper[7385]: I0319 09:18:25.000613 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:18:25.452141 master-0 kubenswrapper[7385]: I0319 09:18:25.451847 7385 apiserver.go:52] "Watching apiserver"
Mar 19 09:18:25.463084 master-0 kubenswrapper[7385]: I0319 09:18:25.463040 7385 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 19 09:18:25.464353 master-0 kubenswrapper[7385]: I0319 09:18:25.464320 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7bd846bfc4-gkvf5","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m","openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw","openshift-multus/multus-additional-cni-plugins-jzj4h","openshift-network-diagnostics/network-check-target-lql9l","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt","openshift-etcd/etcd-master-0-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-operator/iptables-alerter-p9bbz","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt","openshift-multus/multus-8pt59","openshift-multus/network-metrics-daemon-lflg7","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5","openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd","openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj","openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb","openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm","openshift-network-node-identity/network-node-identity-t7zwh","openshift-ovn-kubernetes/ovnkube-node-zmrpw","openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz","openshift-dns-operator/dns-operator-9c5679d8f-k89rz","openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65","assisted-installer/assisted-installer-controller-kw4xv","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x","openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5","openshift-insights/insights-operator-68bf6ff9d6-h4zrl","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf","openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk","openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd","openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc","openshift-marketplace/marketplace-operator-89ccd998f-stct6"]
Mar 19 09:18:25.464787 master-0 kubenswrapper[7385]: I0319 09:18:25.464522 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kw4xv"
Mar 19 09:18:25.464787 master-0 kubenswrapper[7385]: I0319 09:18:25.464744 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:25.465788 master-0 kubenswrapper[7385]: I0319 09:18:25.464931 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:25.465788 master-0 kubenswrapper[7385]: I0319 09:18:25.465625 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:25.466030 master-0 kubenswrapper[7385]: I0319 09:18:25.465975 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:25.466241 master-0 kubenswrapper[7385]: I0319 09:18:25.466207 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:25.467668 master-0 kubenswrapper[7385]: I0319 09:18:25.467137 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 19 09:18:25.467668 master-0 kubenswrapper[7385]: I0319 09:18:25.467516 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:25.467668 master-0 kubenswrapper[7385]: I0319 09:18:25.467536 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:25.468947 master-0 kubenswrapper[7385]: I0319 09:18:25.468913 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 19 09:18:25.469116 master-0 kubenswrapper[7385]: I0319 09:18:25.469081 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 19 09:18:25.470498 master-0 kubenswrapper[7385]: I0319 09:18:25.470156 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:25.470888 master-0 kubenswrapper[7385]: I0319 09:18:25.470846 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:25.471783 master-0 kubenswrapper[7385]: I0319 09:18:25.471749 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:25.472171 master-0 kubenswrapper[7385]: I0319 09:18:25.472035 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:25.472752 master-0 kubenswrapper[7385]: I0319 09:18:25.472705 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:25.474451 master-0 kubenswrapper[7385]: I0319 09:18:25.474426 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:25.474523 master-0 kubenswrapper[7385]: I0319 09:18:25.474475 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:25.474639 master-0 kubenswrapper[7385]: I0319 09:18:25.474611 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 19 09:18:25.475380 master-0 kubenswrapper[7385]: I0319 09:18:25.475050 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:25.475380 master-0 kubenswrapper[7385]: I0319 09:18:25.475192 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 19 09:18:25.475380 master-0 kubenswrapper[7385]: I0319 09:18:25.475273 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 19 09:18:25.475380 master-0 kubenswrapper[7385]: I0319 09:18:25.475312 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 19 09:18:25.475655 master-0 kubenswrapper[7385]: I0319 09:18:25.475419 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 19 09:18:25.475906 master-0 kubenswrapper[7385]: I0319 09:18:25.475839 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 19 09:18:25.475906 master-0 kubenswrapper[7385]: I0319 09:18:25.475842 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 19 09:18:25.476226 master-0 kubenswrapper[7385]: I0319 09:18:25.476171 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.485643 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.485972 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.486202 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.486592 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.486874 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.488707 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.488855 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.489150 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 19 09:18:25.489494 master-0 kubenswrapper[7385]: I0319 09:18:25.489469 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 19 09:18:25.489911 master-0 kubenswrapper[7385]: I0319 09:18:25.489525 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 19 09:18:25.489911 master-0 kubenswrapper[7385]: I0319 09:18:25.489583 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 19 09:18:25.490698 master-0 kubenswrapper[7385]: I0319 09:18:25.490676 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.490813 master-0 kubenswrapper[7385]: I0319 09:18:25.490792 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.490949 master-0 kubenswrapper[7385]: I0319 09:18:25.490899 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 19 09:18:25.491092 master-0 kubenswrapper[7385]: I0319 09:18:25.491071 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491149 master-0 kubenswrapper[7385]: I0319 09:18:25.491119 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 19 09:18:25.491183 master-0 kubenswrapper[7385]: I0319 09:18:25.491152 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 19 09:18:25.491183 master-0 kubenswrapper[7385]: I0319 09:18:25.491162 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 19 09:18:25.491183 master-0 kubenswrapper[7385]: I0319 09:18:25.491176 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 19 09:18:25.491295 master-0 kubenswrapper[7385]: I0319 09:18:25.491235 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 19 09:18:25.491295 master-0 kubenswrapper[7385]: I0319 09:18:25.491260 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.491295 master-0 kubenswrapper[7385]: I0319 09:18:25.491286 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491377 master-0 kubenswrapper[7385]: I0319 09:18:25.491301 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491377 master-0 kubenswrapper[7385]: I0319 09:18:25.491345 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 19 09:18:25.491377 master-0 kubenswrapper[7385]: I0319 09:18:25.491364 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 19 09:18:25.491463 master-0 kubenswrapper[7385]: I0319 09:18:25.491381 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 19 09:18:25.491463 master-0 kubenswrapper[7385]: I0319 09:18:25.491418 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 19 09:18:25.491463 master-0 kubenswrapper[7385]: I0319 09:18:25.491430 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491463 master-0 kubenswrapper[7385]: I0319 09:18:25.491439 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491463 master-0 kubenswrapper[7385]: I0319 09:18:25.491449 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 19 09:18:25.491618 master-0 kubenswrapper[7385]: I0319 09:18:25.491496 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491618 master-0 kubenswrapper[7385]: I0319 09:18:25.491505 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 19 09:18:25.491618 master-0 kubenswrapper[7385]: I0319 09:18:25.491523 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 19 09:18:25.491618 master-0 kubenswrapper[7385]: I0319 09:18:25.491536 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 19 09:18:25.491618 master-0 kubenswrapper[7385]: I0319 09:18:25.491561 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491618 master-0 kubenswrapper[7385]: I0319 09:18:25.491611 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491780 master-0 kubenswrapper[7385]: I0319 09:18:25.491633 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 19 09:18:25.491780 master-0 kubenswrapper[7385]: I0319 09:18:25.491639 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 19 09:18:25.491780 master-0 kubenswrapper[7385]: I0319 09:18:25.491670 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 19 09:18:25.491780 master-0 kubenswrapper[7385]: I0319 09:18:25.491698 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 19 09:18:25.491780 master-0 kubenswrapper[7385]: I0319 09:18:25.491501 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.491898 master-0 kubenswrapper[7385]: I0319 09:18:25.491781 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 19 09:18:25.491898 master-0 kubenswrapper[7385]: I0319 09:18:25.491615 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.491898 master-0 kubenswrapper[7385]: I0319 09:18:25.491811 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.491898 master-0 kubenswrapper[7385]: I0319 09:18:25.491821 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.491898 master-0 kubenswrapper[7385]: I0319 09:18:25.491833 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 19 09:18:25.491898 master-0 kubenswrapper[7385]: I0319 09:18:25.491869 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.491908 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.491618 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.491936 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.491782 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.491785 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.491882 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.492009 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 19 09:18:25.492042 master-0 kubenswrapper[7385]: I0319 09:18:25.491981 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 19 09:18:25.492237 master-0 kubenswrapper[7385]: I0319 09:18:25.492074 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 19 09:18:25.492237 master-0 kubenswrapper[7385]: I0319 09:18:25.492087 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 19 09:18:25.492237 master-0 kubenswrapper[7385]: I0319 09:18:25.492127 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 19 09:18:25.492237 master-0 kubenswrapper[7385]: I0319 09:18:25.492139 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.492237 master-0 kubenswrapper[7385]: I0319 09:18:25.492164 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 19 09:18:25.492237 master-0 kubenswrapper[7385]: I0319 09:18:25.492210 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.492237 master-0 kubenswrapper[7385]: I0319 09:18:25.492218 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 19 09:18:25.492507 master-0 kubenswrapper[7385]: I0319 09:18:25.492130 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 19 09:18:25.492507 master-0 kubenswrapper[7385]: I0319 09:18:25.492445 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 19 09:18:25.492507 master-0 kubenswrapper[7385]: I0319 09:18:25.492445 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.492507 master-0 kubenswrapper[7385]: I0319 09:18:25.492395 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.492668 master-0 kubenswrapper[7385]: I0319 09:18:25.492578 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 19 09:18:25.492818 master-0 kubenswrapper[7385]: I0319 09:18:25.492790 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493096 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493137 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493160 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493233 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493241 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493285 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493342 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493369 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493436 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493526 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.493808 master-0 kubenswrapper[7385]: I0319 09:18:25.493603 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 19 09:18:25.495774 master-0 kubenswrapper[7385]: I0319 09:18:25.495755 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 19 09:18:25.495957 master-0 kubenswrapper[7385]: I0319 09:18:25.495842 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 19 09:18:25.496915 master-0 kubenswrapper[7385]: I0319 09:18:25.496866 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 09:18:25.500837 master-0 kubenswrapper[7385]: I0319 09:18:25.500808 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 09:18:25.502712 master-0 kubenswrapper[7385]: I0319 09:18:25.502689 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 19 09:18:25.504815 master-0 kubenswrapper[7385]: I0319 09:18:25.504788 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.504884 master-0 kubenswrapper[7385]: I0319 09:18:25.504826 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:25.504884 master-0 kubenswrapper[7385]: I0319 09:18:25.504846 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:25.504884 master-0 kubenswrapper[7385]: I0319 09:18:25.504865 7385 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.504884 master-0 kubenswrapper[7385]: I0319 09:18:25.504881 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7plsz\" (UniqueName: \"kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.504898 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.504916 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5fk\" (UniqueName: \"kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.504933 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.504950 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.504965 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.504982 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.504995 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.505014 master-0 kubenswrapper[7385]: I0319 09:18:25.505010 7385 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505027 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdvd\" (UniqueName: \"kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505042 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505060 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svkc\" (UniqueName: \"kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505075 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbzvl\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl\") pod 
\"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505091 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505108 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505124 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505139 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505157 7385 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505171 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505196 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:18:25.505208 master-0 kubenswrapper[7385]: I0319 09:18:25.505213 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8cg\" (UniqueName: \"kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505232 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7rj\" (UniqueName: \"kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 
09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505253 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505269 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505285 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505301 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505323 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505340 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505357 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505385 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:25.505609 master-0 kubenswrapper[7385]: I0319 09:18:25.505401 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8bm4\" (UniqueName: \"kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") 
" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:25.505884 master-0 kubenswrapper[7385]: I0319 09:18:25.505856 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:25.506150 master-0 kubenswrapper[7385]: I0319 09:18:25.506120 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnjq\" (UniqueName: \"kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:25.506201 master-0 kubenswrapper[7385]: I0319 09:18:25.506169 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85vjd\" (UniqueName: \"kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:25.506201 master-0 kubenswrapper[7385]: I0319 09:18:25.506200 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:25.506343 master-0 kubenswrapper[7385]: I0319 09:18:25.506309 7385 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 09:18:25.506705 master-0 kubenswrapper[7385]: I0319 09:18:25.506688 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 19 09:18:25.506801 master-0 kubenswrapper[7385]: I0319 09:18:25.506776 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:25.506801 master-0 kubenswrapper[7385]: I0319 09:18:25.506775 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:25.506899 master-0 kubenswrapper[7385]: I0319 09:18:25.506875 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.506936 master-0 kubenswrapper[7385]: I0319 09:18:25.506885 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:25.506975 master-0 kubenswrapper[7385]: I0319 09:18:25.506949 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.507005 master-0 kubenswrapper[7385]: I0319 09:18:25.506974 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.507037 master-0 kubenswrapper[7385]: I0319 09:18:25.506822 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:25.507037 master-0 kubenswrapper[7385]: I0319 09:18:25.507024 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnp7\" (UniqueName: \"kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:25.507093 master-0 kubenswrapper[7385]: I0319 09:18:25.507059 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll8k\" (UniqueName: 
\"kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:25.507093 master-0 kubenswrapper[7385]: I0319 09:18:25.507060 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.507168 master-0 kubenswrapper[7385]: I0319 09:18:25.507118 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:25.507243 master-0 kubenswrapper[7385]: I0319 09:18:25.507213 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:25.507291 master-0 kubenswrapper[7385]: I0319 09:18:25.507250 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:25.507291 master-0 
kubenswrapper[7385]: I0319 09:18:25.507257 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:25.507291 master-0 kubenswrapper[7385]: I0319 09:18:25.507277 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:25.507370 master-0 kubenswrapper[7385]: I0319 09:18:25.507303 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.507370 master-0 kubenswrapper[7385]: I0319 09:18:25.507326 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.507370 master-0 kubenswrapper[7385]: I0319 09:18:25.507349 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.507370 master-0 kubenswrapper[7385]: I0319 09:18:25.507356 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:25.507370 master-0 kubenswrapper[7385]: I0319 09:18:25.507369 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:25.507492 master-0 kubenswrapper[7385]: I0319 09:18:25.507382 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:25.507492 master-0 kubenswrapper[7385]: I0319 09:18:25.507395 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xjhk\" (UniqueName: \"kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:25.507492 master-0 kubenswrapper[7385]: I0319 09:18:25.507408 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:25.507492 master-0 kubenswrapper[7385]: I0319 09:18:25.507421 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hccqk\" (UniqueName: \"kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:25.507492 master-0 kubenswrapper[7385]: I0319 09:18:25.507444 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.507492 master-0 kubenswrapper[7385]: I0319 09:18:25.507469 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.507492 master-0 kubenswrapper[7385]: I0319 09:18:25.507492 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:25.507739 master-0 
kubenswrapper[7385]: I0319 09:18:25.507515 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:18:25.507739 master-0 kubenswrapper[7385]: I0319 09:18:25.507521 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:25.507739 master-0 kubenswrapper[7385]: I0319 09:18:25.507536 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:25.508891 master-0 kubenswrapper[7385]: I0319 09:18:25.508864 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:25.508891 master-0 kubenswrapper[7385]: I0319 09:18:25.508877 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") 
" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:25.509001 master-0 kubenswrapper[7385]: I0319 09:18:25.508889 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:18:25.509001 master-0 kubenswrapper[7385]: I0319 09:18:25.508933 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:25.509097 master-0 kubenswrapper[7385]: I0319 09:18:25.509068 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:25.509503 master-0 kubenswrapper[7385]: I0319 09:18:25.509457 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 09:18:25.510127 master-0 kubenswrapper[7385]: I0319 09:18:25.510089 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:25.510208 master-0 kubenswrapper[7385]: I0319 09:18:25.510137 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:25.510292 master-0 kubenswrapper[7385]: I0319 09:18:25.510179 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4n26\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:25.510342 master-0 kubenswrapper[7385]: I0319 09:18:25.510301 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.510342 master-0 kubenswrapper[7385]: I0319 09:18:25.510336 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.510462 master-0 kubenswrapper[7385]: I0319 09:18:25.510363 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.510522 master-0 kubenswrapper[7385]: I0319 09:18:25.510476 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:25.510522 master-0 kubenswrapper[7385]: I0319 09:18:25.510513 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7thvr\" (UniqueName: \"kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:25.510698 master-0 kubenswrapper[7385]: I0319 09:18:25.510661 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:25.510760 master-0 kubenswrapper[7385]: I0319 09:18:25.510578 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: 
\"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:25.510760 master-0 kubenswrapper[7385]: I0319 09:18:25.510731 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:25.510858 master-0 kubenswrapper[7385]: I0319 09:18:25.510788 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmdk\" (UniqueName: \"kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.510960 master-0 kubenswrapper[7385]: I0319 09:18:25.510826 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.511024 master-0 kubenswrapper[7385]: I0319 09:18:25.510966 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:25.511024 master-0 kubenswrapper[7385]: I0319 09:18:25.510982 7385 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:25.511024 master-0 kubenswrapper[7385]: I0319 09:18:25.511012 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:25.511152 master-0 kubenswrapper[7385]: I0319 09:18:25.511043 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:18:25.511202 master-0 kubenswrapper[7385]: I0319 09:18:25.511164 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpcnv\" (UniqueName: \"kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:18:25.511202 master-0 kubenswrapper[7385]: I0319 09:18:25.511198 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt99t\" (UniqueName: 
\"kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:25.511282 master-0 kubenswrapper[7385]: I0319 09:18:25.511224 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.511444 master-0 kubenswrapper[7385]: I0319 09:18:25.511411 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.511499 master-0 kubenswrapper[7385]: I0319 09:18:25.511458 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:25.511499 master-0 kubenswrapper[7385]: I0319 09:18:25.511493 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:25.511616 master-0 kubenswrapper[7385]: I0319 09:18:25.511524 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:18:25.511616 master-0 kubenswrapper[7385]: I0319 09:18:25.511539 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:25.511704 master-0 kubenswrapper[7385]: I0319 09:18:25.511683 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:25.511816 master-0 kubenswrapper[7385]: I0319 09:18:25.511770 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:25.511868 master-0 kubenswrapper[7385]: I0319 09:18:25.511828 7385 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:25.511868 master-0 kubenswrapper[7385]: I0319 09:18:25.511850 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c654s\" (UniqueName: \"kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:25.511868 master-0 kubenswrapper[7385]: I0319 09:18:25.511860 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:25.511987 master-0 kubenswrapper[7385]: I0319 09:18:25.511874 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.511987 master-0 kubenswrapper[7385]: I0319 09:18:25.511972 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config\") pod 
\"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:25.512074 master-0 kubenswrapper[7385]: I0319 09:18:25.512061 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfnn\" (UniqueName: \"kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.512355 master-0 kubenswrapper[7385]: I0319 09:18:25.512318 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:25.512423 master-0 kubenswrapper[7385]: I0319 09:18:25.512367 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:25.512518 master-0 kubenswrapper[7385]: I0319 09:18:25.512487 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:25.512585 
master-0 kubenswrapper[7385]: I0319 09:18:25.512569 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:18:25.512630 master-0 kubenswrapper[7385]: I0319 09:18:25.512593 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.512726 master-0 kubenswrapper[7385]: I0319 09:18:25.512693 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.512776 master-0 kubenswrapper[7385]: I0319 09:18:25.512728 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h925l\" (UniqueName: \"kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:25.512776 master-0 kubenswrapper[7385]: I0319 09:18:25.512729 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert\") pod 
\"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:25.512776 master-0 kubenswrapper[7385]: I0319 09:18:25.512745 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.512895 master-0 kubenswrapper[7385]: I0319 09:18:25.512864 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:25.513049 master-0 kubenswrapper[7385]: I0319 09:18:25.512922 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:25.513049 master-0 kubenswrapper[7385]: I0319 09:18:25.513020 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:25.513231 master-0 kubenswrapper[7385]: I0319 09:18:25.513062 7385 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:25.513287 master-0 kubenswrapper[7385]: I0319 09:18:25.513263 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:25.513341 master-0 kubenswrapper[7385]: I0319 09:18:25.513296 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.513341 master-0 kubenswrapper[7385]: I0319 09:18:25.513324 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:25.513423 master-0 kubenswrapper[7385]: I0319 09:18:25.513350 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " 
pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:18:25.513423 master-0 kubenswrapper[7385]: I0319 09:18:25.513375 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:25.513423 master-0 kubenswrapper[7385]: I0319 09:18:25.513401 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:18:25.513569 master-0 kubenswrapper[7385]: I0319 09:18:25.513429 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbw6q\" (UniqueName: \"kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:18:25.513569 master-0 kubenswrapper[7385]: I0319 09:18:25.513457 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.513569 master-0 kubenswrapper[7385]: I0319 09:18:25.513488 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvtc\" 
(UniqueName: \"kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:18:25.513569 master-0 kubenswrapper[7385]: I0319 09:18:25.513515 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.513569 master-0 kubenswrapper[7385]: I0319 09:18:25.513263 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:25.513772 master-0 kubenswrapper[7385]: I0319 09:18:25.513490 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:25.513772 master-0 kubenswrapper[7385]: I0319 09:18:25.513564 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.513772 master-0 kubenswrapper[7385]: I0319 
09:18:25.513692 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:25.515400 master-0 kubenswrapper[7385]: I0319 09:18:25.515360 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:25.515475 master-0 kubenswrapper[7385]: I0319 09:18:25.515411 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 19 09:18:25.515571 master-0 kubenswrapper[7385]: I0319 09:18:25.515526 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:25.515625 master-0 kubenswrapper[7385]: I0319 09:18:25.513769 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:25.515675 master-0 kubenswrapper[7385]: I0319 09:18:25.515623 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.515675 master-0 kubenswrapper[7385]: I0319 09:18:25.515663 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.515752 master-0 kubenswrapper[7385]: I0319 09:18:25.515692 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.515752 master-0 kubenswrapper[7385]: I0319 09:18:25.515729 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 19 09:18:25.515752 master-0 kubenswrapper[7385]: I0319 09:18:25.515739 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 19 09:18:25.515857 master-0 kubenswrapper[7385]: I0319 09:18:25.515788 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.515904 master-0 kubenswrapper[7385]: I0319 09:18:25.515889 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.515945 master-0 kubenswrapper[7385]: I0319 09:18:25.515900 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:25.516035 master-0 kubenswrapper[7385]: I0319 09:18:25.515947 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:25.516094 master-0 kubenswrapper[7385]: I0319 09:18:25.516073 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:25.516177 master-0 kubenswrapper[7385]: I0319 09:18:25.516153 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:25.516177 master-0 kubenswrapper[7385]: I0319 09:18:25.516169 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:18:25.516286 master-0 kubenswrapper[7385]: I0319 09:18:25.516194 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:25.516286 master-0 kubenswrapper[7385]: I0319 09:18:25.516209 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:25.516286 master-0 kubenswrapper[7385]: I0319 09:18:25.516230 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:25.516286 master-0 kubenswrapper[7385]: I0319 09:18:25.516215 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516169 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516310 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbktm\" (UniqueName: \"kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm\") pod \"csi-snapshot-controller-operator-5f5d689c6b-d89zz\" (UID: \"43fca1a4-4fa7-4a43-b9c4-7f50a8737643\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516336 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516356 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516374 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516420 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516394 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:18:25.516455 master-0 kubenswrapper[7385]: I0319 09:18:25.516462 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516477 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516478 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516507 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr4bl\" (UniqueName: \"kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516526 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516561 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516578 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516593 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516610 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516689 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516761 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.516833 master-0 kubenswrapper[7385]: I0319 09:18:25.516817 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.516846 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.516855 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.516870 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnxk9\" (UniqueName: \"kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.516893 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.516928 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47plx\" (UniqueName: \"kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.517004 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.517026 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.517074 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:25.517093 master-0 kubenswrapper[7385]: I0319 09:18:25.517091 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517106 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517140 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8b7s\" (UniqueName: \"kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517156 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517172 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517199 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517261 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2hfh\" (UniqueName: \"kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517280 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.517332 master-0 kubenswrapper[7385]: I0319 09:18:25.517329 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:25.517588 master-0 kubenswrapper[7385]: I0319 09:18:25.517575 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:25.517731 master-0 kubenswrapper[7385]: I0319 09:18:25.517705 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:18:25.517765 master-0 kubenswrapper[7385]: I0319 09:18:25.517732 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:25.518145 master-0 kubenswrapper[7385]: I0319 09:18:25.518118 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:18:25.523517 master-0 kubenswrapper[7385]: I0319 09:18:25.523488 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:25.523611 master-0 kubenswrapper[7385]: I0319 09:18:25.523533 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5"
Mar 19 09:18:25.534290 master-0 kubenswrapper[7385]: I0319 09:18:25.534257 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 19 09:18:25.537102 master-0 kubenswrapper[7385]: I0319 09:18:25.537067 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:18:25.554012 master-0 kubenswrapper[7385]: I0319 09:18:25.553961 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 19 09:18:25.573674 master-0 kubenswrapper[7385]: I0319 09:18:25.573617 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 19 09:18:25.581311 master-0 kubenswrapper[7385]: I0319 09:18:25.581275 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:18:25.586821 master-0 kubenswrapper[7385]: I0319 09:18:25.586786 7385 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 19 09:18:25.594191 master-0 kubenswrapper[7385]: I0319 09:18:25.594170 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 19 09:18:25.614182 master-0 kubenswrapper[7385]: I0319 09:18:25.614141 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 19 09:18:25.617900 master-0 kubenswrapper[7385]: I0319 09:18:25.617854 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:25.617976 master-0 kubenswrapper[7385]: I0319 09:18:25.617923 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.617976 master-0 kubenswrapper[7385]: I0319 09:18:25.617966 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.618150 master-0 kubenswrapper[7385]: I0319 09:18:25.618113 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.618299 master-0 kubenswrapper[7385]: I0319 09:18:25.618250 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:25.618299 master-0 kubenswrapper[7385]: I0319 09:18:25.618290 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:25.618381 master-0 kubenswrapper[7385]: I0319 09:18:25.618319 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:25.618429 master-0 kubenswrapper[7385]: I0319 09:18:25.618383 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:25.618468 master-0 kubenswrapper[7385]: I0319 09:18:25.618427 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.618468 master-0 kubenswrapper[7385]: I0319 09:18:25.618454 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.618573 master-0 kubenswrapper[7385]: I0319 09:18:25.618534 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:25.618620 master-0 kubenswrapper[7385]: I0319 09:18:25.618582 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.618620 master-0 kubenswrapper[7385]: I0319 09:18:25.618602 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.618705 master-0 kubenswrapper[7385]: E0319 09:18:25.618609 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:25.618705 master-0 kubenswrapper[7385]: I0319 09:18:25.618642 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.618705 master-0 kubenswrapper[7385]: I0319 09:18:25.618663 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.618705 master-0 kubenswrapper[7385]: E0319 09:18:25.618700 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.118677279 +0000 UTC m=+1.793107080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:25.618854 master-0 kubenswrapper[7385]: E0319 09:18:25.618740 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:25.618854 master-0 kubenswrapper[7385]: I0319 09:18:25.618767 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.618854 master-0 kubenswrapper[7385]: I0319 09:18:25.618764 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.618854 master-0 kubenswrapper[7385]: I0319 09:18:25.618794 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.618854 master-0 kubenswrapper[7385]: I0319 09:18:25.618826 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.618854 master-0 kubenswrapper[7385]: E0319 09:18:25.618836 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.118807343 +0000 UTC m=+1.793237124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618871 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618895 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618891 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618702 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618946 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618968 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618975 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.619054 master-0 kubenswrapper[7385]: I0319 09:18:25.618841 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:25.619054 master-0
kubenswrapper[7385]: E0319 09:18:25.619047 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619056 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: E0319 09:18:25.619080 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.11906782 +0000 UTC m=+1.793497641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: E0319 09:18:25.618749 7385 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: E0319 09:18:25.619106 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.119100601 +0000 UTC m=+1.793530292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619124 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: E0319 09:18:25.619142 7385 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: E0319 09:18:25.619161 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.119155483 +0000 UTC m=+1.793585304 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619197 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619253 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619275 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619334 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619342 7385 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.619366 master-0 kubenswrapper[7385]: I0319 09:18:25.619372 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619386 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619412 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619449 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619461 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619521 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619534 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619621 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619681 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: E0319 09:18:25.619703 7385 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: 
secret "metrics-daemon-secret" not found Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: E0319 09:18:25.619752 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.11973629 +0000 UTC m=+1.794166111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : secret "metrics-daemon-secret" not found Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619761 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619780 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619832 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619837 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619886 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: E0319 09:18:25.619924 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.619930 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: E0319 09:18:25.619988 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.119952366 +0000 UTC m=+1.794382117 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.620011 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.620012 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.620055 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.620066 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.620094 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-t58zw\" (UniqueName: \"kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:25.620148 master-0 kubenswrapper[7385]: I0319 09:18:25.620163 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620192 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620325 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620351 7385 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620371 7385 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620416 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.120403219 +0000 UTC m=+1.794832920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620441 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620500 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620579 7385 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620607 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620681 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620716 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620720 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620745 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620777 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620811 7385 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620820 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620845 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.120834061 +0000 UTC m=+1.795263762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620863 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620868 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.120856582 +0000 UTC m=+1.795286303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620915 7385 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620943 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: E0319 09:18:25.620956 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.120942604 +0000 UTC m=+1.795372315 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620983 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.621036 master-0 kubenswrapper[7385]: I0319 09:18:25.620988 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621063 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621112 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621181 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621218 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621235 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621264 7385 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621306 7385 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.620597 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621315 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621341 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.121332706 +0000 UTC m=+1.795762407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621445 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621460 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621519 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.12150352 +0000 UTC m=+1.795933231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621558 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.121531091 +0000 UTC m=+1.795960802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621578 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621599 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621673 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621758 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621793 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621833 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621794 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621869 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621867 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: E0319 09:18:25.621899 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.121889481 +0000 UTC m=+1.796319182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621903 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:18:25.621964 master-0 kubenswrapper[7385]: I0319 09:18:25.621939 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.622867 master-0 kubenswrapper[7385]: I0319 09:18:25.622009 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:25.622867 master-0 kubenswrapper[7385]: I0319 09:18:25.622033 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.622867 master-0 kubenswrapper[7385]: I0319 09:18:25.622097 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.633428 master-0 kubenswrapper[7385]: I0319 09:18:25.633379 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 19 09:18:25.637396 master-0 kubenswrapper[7385]: I0319 09:18:25.637363 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:18:25.654233 master-0 kubenswrapper[7385]: I0319 09:18:25.654172 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 19 09:18:25.658613 master-0 kubenswrapper[7385]: I0319 09:18:25.658567 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.659892 master-0 kubenswrapper[7385]: I0319 09:18:25.659822 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:18:25.673225 master-0 kubenswrapper[7385]: I0319 09:18:25.673178 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 19 09:18:25.695345 master-0 kubenswrapper[7385]: I0319 09:18:25.695300 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 19 09:18:25.698313 master-0 kubenswrapper[7385]: E0319 09:18:25.698273 7385 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 19 09:18:25.698369 master-0 kubenswrapper[7385]: E0319 09:18:25.698342 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:26.198325321 +0000 UTC m=+1.872755022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found
Mar 19 09:18:25.726862 master-0 kubenswrapper[7385]: I0319 09:18:25.713725 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 19 09:18:25.726862 master-0 kubenswrapper[7385]: I0319 09:18:25.723379 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:25.726862 master-0 kubenswrapper[7385]: I0319 09:18:25.723472 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:25.733691 master-0 kubenswrapper[7385]: I0319 09:18:25.733652 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 19 09:18:25.753281 master-0 kubenswrapper[7385]: I0319 09:18:25.753254 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 19 09:18:25.774669 master-0 kubenswrapper[7385]: I0319 09:18:25.774603 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 19 09:18:25.782685 master-0 kubenswrapper[7385]: I0319 09:18:25.782658 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:18:25.794088 master-0 kubenswrapper[7385]: I0319 09:18:25.794047 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 19 09:18:25.813891 master-0 kubenswrapper[7385]: I0319 09:18:25.813637 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 19 09:18:25.816285 master-0 kubenswrapper[7385]: I0319 09:18:25.816239 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:18:25.834571 master-0 kubenswrapper[7385]: I0319 09:18:25.834520 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 19 09:18:25.838772 master-0 kubenswrapper[7385]: I0319 09:18:25.838742 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:18:25.854488 master-0 kubenswrapper[7385]: I0319 09:18:25.854446 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 19 09:18:25.855662 master-0 kubenswrapper[7385]: I0319 09:18:25.855633 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.857979 master-0 kubenswrapper[7385]: I0319 09:18:25.857948 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:18:25.894159 master-0 kubenswrapper[7385]: I0319 09:18:25.894117 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 19 09:18:25.896295 master-0 kubenswrapper[7385]: I0319 09:18:25.896268 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.914360 master-0 kubenswrapper[7385]: I0319 09:18:25.914313 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 19 09:18:25.921652 master-0 kubenswrapper[7385]: I0319 09:18:25.921597 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:25.934143 master-0 kubenswrapper[7385]: I0319 09:18:25.934115 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 19 09:18:25.940175 master-0 kubenswrapper[7385]: I0319 09:18:25.940147 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:18:25.964809 master-0 kubenswrapper[7385]: I0319 09:18:25.964755 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbzvl\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:25.985499 master-0 kubenswrapper[7385]: I0319 09:18:25.985415 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7plsz\" (UniqueName: \"kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:26.005860 master-0 kubenswrapper[7385]: I0319 09:18:26.005807 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5fk\" (UniqueName: \"kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:18:26.026516 master-0 kubenswrapper[7385]: I0319 09:18:26.026449 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svkc\" (UniqueName: \"kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:26.044823 master-0 kubenswrapper[7385]: I0319 09:18:26.044778 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdvd\" (UniqueName: \"kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:18:26.064865 master-0 kubenswrapper[7385]: I0319 09:18:26.064818 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8bm4\" (UniqueName: \"kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:18:26.070864 master-0 kubenswrapper[7385]: I0319 09:18:26.068356 7385 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 19 09:18:26.084714 master-0 kubenswrapper[7385]: I0319 09:18:26.084664 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnjq\" (UniqueName: \"kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:26.104057 master-0 kubenswrapper[7385]: I0319 09:18:26.104001 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85vjd\" (UniqueName: \"kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:18:26.125870 master-0 kubenswrapper[7385]: I0319 09:18:26.123717 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnp7\" (UniqueName: \"kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:26.128127 master-0 kubenswrapper[7385]: I0319 09:18:26.127949 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:26.128127 master-0 kubenswrapper[7385]: I0319 09:18:26.128031 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:26.128269 master-0 kubenswrapper[7385]: E0319 09:18:26.128130 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 19 09:18:26.128269 master-0 kubenswrapper[7385]: E0319 09:18:26.128145 7385 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 19 09:18:26.128269 master-0 kubenswrapper[7385]: E0319 09:18:26.128205 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.128190347 +0000 UTC m=+2.802620048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found
Mar 19 09:18:26.128428 master-0 kubenswrapper[7385]: I0319 09:18:26.128300 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:26.128428 master-0 kubenswrapper[7385]: E0319 09:18:26.128368 7385 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 19 09:18:26.128428 master-0 kubenswrapper[7385]: E0319 09:18:26.128375 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.12830845 +0000 UTC m=+2.802738171 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found
Mar 19 09:18:26.128533 master-0 kubenswrapper[7385]: I0319 09:18:26.128508 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:26.128597 master-0 kubenswrapper[7385]: E0319 09:18:26.128536 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.128515826 +0000 UTC m=+2.802945577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : secret "metrics-daemon-secret" not found
Mar 19 09:18:26.128641 master-0 kubenswrapper[7385]: I0319 09:18:26.128605 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:26.128684 master-0 kubenswrapper[7385]: E0319 09:18:26.128656 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 19 09:18:26.128684 master-0 kubenswrapper[7385]: I0319 09:18:26.128662 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128704 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.128693401 +0000 UTC m=+2.803123122 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128725 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128756 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.128747053 +0000 UTC m=+2.803176754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.128726 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.128813 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128775 7385 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.128862 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128790 7385 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128887 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.128877036 +0000 UTC m=+2.803306757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128915 7385 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128935 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128947 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.128938788 +0000 UTC m=+2.803368489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.128962 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.128954009 +0000 UTC m=+2.803383710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.128977 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.128997 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129032 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.129024131 +0000 UTC m=+2.803453972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129150 7385 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.129175 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129231 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.129207466 +0000 UTC m=+2.803637197 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129244 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129269 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.129262607 +0000 UTC m=+2.803692308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.129285 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129300 7385 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129334 7385 secret.go:189] Couldn't get secret 
openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129338 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.129327169 +0000 UTC m=+2.803756930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129353 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.12934736 +0000 UTC m=+2.803777061 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.129304 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: I0319 09:18:26.129375 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129380 7385 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129408 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.129398241 +0000 UTC m=+2.803827942 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129444 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:26.130319 master-0 kubenswrapper[7385]: E0319 09:18:26.129470 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.129464114 +0000 UTC m=+2.803893815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:26.143957 master-0 kubenswrapper[7385]: I0319 09:18:26.143921 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8cg\" (UniqueName: \"kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:26.165129 master-0 kubenswrapper[7385]: I0319 09:18:26.165090 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll8k\" (UniqueName: \"kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:26.183483 master-0 kubenswrapper[7385]: I0319 09:18:26.183447 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hccqk\" (UniqueName: \"kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:26.205606 master-0 kubenswrapper[7385]: I0319 09:18:26.205575 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjhk\" (UniqueName: \"kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:26.226578 master-0 kubenswrapper[7385]: I0319 09:18:26.226521 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:26.230507 master-0 kubenswrapper[7385]: I0319 09:18:26.230472 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:26.230648 master-0 kubenswrapper[7385]: E0319 09:18:26.230629 7385 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret 
"multus-admission-controller-secret" not found Mar 19 09:18:26.230698 master-0 kubenswrapper[7385]: E0319 09:18:26.230686 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:27.230672375 +0000 UTC m=+2.905102076 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found Mar 19 09:18:26.246332 master-0 kubenswrapper[7385]: I0319 09:18:26.246255 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4n26\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:26.264647 master-0 kubenswrapper[7385]: I0319 09:18:26.264628 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7thvr\" (UniqueName: \"kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:18:26.286943 master-0 kubenswrapper[7385]: I0319 09:18:26.286920 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpcnv\" (UniqueName: \"kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv\") pod \"network-node-identity-t7zwh\" (UID: 
\"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:18:26.307131 master-0 kubenswrapper[7385]: I0319 09:18:26.307101 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:18:26.325743 master-0 kubenswrapper[7385]: I0319 09:18:26.325708 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmdk\" (UniqueName: \"kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:18:26.345944 master-0 kubenswrapper[7385]: I0319 09:18:26.345904 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c654s\" (UniqueName: \"kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:18:26.364105 master-0 kubenswrapper[7385]: I0319 09:18:26.364049 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfnn\" (UniqueName: \"kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:26.386717 master-0 kubenswrapper[7385]: I0319 09:18:26.386687 7385 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dt99t\" (UniqueName: \"kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:26.407012 master-0 kubenswrapper[7385]: I0319 09:18:26.406975 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:18:26.425580 master-0 kubenswrapper[7385]: I0319 09:18:26.425555 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7rj\" (UniqueName: \"kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:18:26.447916 master-0 kubenswrapper[7385]: I0319 09:18:26.447882 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h925l\" (UniqueName: \"kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:26.463793 master-0 kubenswrapper[7385]: I0319 09:18:26.463752 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvtc\" (UniqueName: \"kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: 
\"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:18:26.486418 master-0 kubenswrapper[7385]: I0319 09:18:26.486373 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbw6q\" (UniqueName: \"kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:18:26.505902 master-0 kubenswrapper[7385]: I0319 09:18:26.505808 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:26.524478 master-0 kubenswrapper[7385]: I0319 09:18:26.524435 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbktm\" (UniqueName: \"kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm\") pod \"csi-snapshot-controller-operator-5f5d689c6b-d89zz\" (UID: \"43fca1a4-4fa7-4a43-b9c4-7f50a8737643\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" Mar 19 09:18:26.532197 master-0 kubenswrapper[7385]: I0319 09:18:26.532172 7385 request.go:700] Waited for 1.015429631s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token Mar 19 09:18:26.543536 master-0 kubenswrapper[7385]: I0319 09:18:26.543495 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr4bl\" (UniqueName: 
\"kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:18:26.564001 master-0 kubenswrapper[7385]: I0319 09:18:26.563970 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:26.584245 master-0 kubenswrapper[7385]: I0319 09:18:26.584192 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnxk9\" (UniqueName: \"kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:18:26.605925 master-0 kubenswrapper[7385]: I0319 09:18:26.605885 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47plx\" (UniqueName: \"kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:26.624189 master-0 kubenswrapper[7385]: I0319 09:18:26.624167 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:18:26.644619 master-0 kubenswrapper[7385]: I0319 09:18:26.644581 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8b7s\" (UniqueName: \"kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:26.663824 master-0 kubenswrapper[7385]: I0319 09:18:26.663796 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2hfh\" (UniqueName: \"kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:18:26.686135 master-0 kubenswrapper[7385]: I0319 09:18:26.686104 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58zw\" (UniqueName: \"kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:18:26.698881 master-0 kubenswrapper[7385]: E0319 09:18:26.698791 7385 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t58zw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-p9bbz_openshift-network-operator(672ad0aa-a0c5-4640-840d-3ffa02c55d62): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Mar 19 09:18:26.700670 master-0 kubenswrapper[7385]: E0319 09:18:26.700599 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-network-operator/iptables-alerter-p9bbz" podUID="672ad0aa-a0c5-4640-840d-3ffa02c55d62" Mar 19 09:18:26.708002 master-0 kubenswrapper[7385]: I0319 09:18:26.707972 7385 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 19 09:18:26.712659 master-0 kubenswrapper[7385]: I0319 09:18:26.712635 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:18:26.976307 master-0 kubenswrapper[7385]: I0319 09:18:26.976248 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:18:27.061072 master-0 kubenswrapper[7385]: I0319 09:18:27.060970 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=5.06095305 podStartE2EDuration="5.06095305s" podCreationTimestamp="2026-03-19 09:18:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:26.961529701 +0000 UTC m=+2.635959402" watchObservedRunningTime="2026-03-19 09:18:27.06095305 +0000 UTC m=+2.735382751" Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145128 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145174 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145191 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:27.145553 
master-0 kubenswrapper[7385]: I0319 09:18:27.145212 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145236 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145260 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145281 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145302 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145322 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145337 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145352 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145368 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145382 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145398 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:27.145553 master-0 kubenswrapper[7385]: I0319 09:18:27.145417 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146093 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146143 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.14613038 +0000 UTC m=+4.820560081 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146442 7385 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146463 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146455151 +0000 UTC m=+4.820884852 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146494 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146511 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146505192 +0000 UTC m=+4.820934893 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146558 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146576 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146569414 +0000 UTC m=+4.820999115 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146606 7385 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146621 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146616315 +0000 UTC m=+4.821046016 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146665 7385 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146681 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146676567 +0000 UTC m=+4.821106268 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146688 7385 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146718 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146731 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146717788 +0000 UTC m=+4.821147489 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146747 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146739729 +0000 UTC m=+4.821169430 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146753 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146770 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146764039 +0000 UTC m=+4.821193740 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146804 7385 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146820 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146814161 +0000 UTC m=+4.821243862 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : secret "metrics-daemon-secret" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146825 7385 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146842 7385 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146854 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146844862 +0000 UTC m=+4.821274563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146868 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146860482 +0000 UTC m=+4.821290183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146876 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146889 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146884613 +0000 UTC m=+4.821314314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146918 7385 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146924 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146933 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146929124 +0000 UTC m=+4.821358825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:27.146953 master-0 kubenswrapper[7385]: E0319 09:18:27.146949 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.146941594 +0000 UTC m=+4.821371295 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:27.246894 master-0 kubenswrapper[7385]: I0319 09:18:27.246788 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:27.247046 master-0 kubenswrapper[7385]: E0319 09:18:27.246974 7385 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 19 09:18:27.247046 master-0 kubenswrapper[7385]: E0319 09:18:27.247017 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:29.247002372 +0000 UTC m=+4.921432073 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found
Mar 19 09:18:27.539026 master-0 kubenswrapper[7385]: I0319 09:18:27.538821 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:27.546148 master-0 kubenswrapper[7385]: I0319 09:18:27.545803 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:28.081239 master-0 kubenswrapper[7385]: I0319 09:18:28.077430 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:30.671263 master-0 kubenswrapper[7385]: I0319 09:18:28.454579 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:30.671263 master-0 kubenswrapper[7385]: I0319 09:18:28.458233 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.851820 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.851879 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.851916 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.851952 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.851978 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852019 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852045 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852069 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852102 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852140 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852166 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852191 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852221 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852250 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852282 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: I0319 09:18:31.852324 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.852595 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.852678 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.852655773 +0000 UTC m=+11.527085474 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853171 7385 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853226 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853212929 +0000 UTC m=+11.527642630 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : secret "metrics-daemon-secret" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853277 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853300 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853292952 +0000 UTC m=+11.527722653 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853349 7385 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853377 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853368394 +0000 UTC m=+11.527798095 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853430 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853455 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853446806 +0000 UTC m=+11.527876517 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853503 7385 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853526 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853518888 +0000 UTC m=+11.527948589 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853727 7385 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853760 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853751355 +0000 UTC m=+11.528181066 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853809 7385 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853832 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853824277 +0000 UTC m=+11.528253978 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853879 7385 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853906 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853898029 +0000 UTC m=+11.528327730 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.853978 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.854005 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.853996552 +0000 UTC m=+11.528426253 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.854052 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.854075 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.854067364 +0000 UTC m=+11.528497065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.854124 7385 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.854149 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.854141056 +0000 UTC m=+11.528570757 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.854200 7385 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:31.854509 master-0 kubenswrapper[7385]: E0319 09:18:31.854227 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.854218298 +0000 UTC m=+11.528647999 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:31.857096 master-0 kubenswrapper[7385]: E0319 09:18:31.855620 7385 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 09:18:31.857096 master-0 kubenswrapper[7385]: E0319 09:18:31.855722 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.855702801 +0000 UTC m=+11.530132562 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found Mar 19 09:18:31.857096 master-0 kubenswrapper[7385]: E0319 09:18:31.855839 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:31.857096 master-0 kubenswrapper[7385]: E0319 09:18:31.855874 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.855864776 +0000 UTC m=+11.530294557 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:31.862845 master-0 kubenswrapper[7385]: E0319 09:18:31.862767 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:31.863069 master-0 kubenswrapper[7385]: E0319 09:18:31.862940 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:35.862909468 +0000 UTC m=+11.537339169 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: I0319 09:18:31.869325 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" podUID="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" containerName="nbdb" probeResult="failure" output=< Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + . 
/ovnkube-lib/ovnkube-lib.sh Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ set -x Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ K8S_NODE=master-0 Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ [[ -n master-0 ]] Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ [[ -f /env/master-0 ]] Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ northd_pidfile=/var/run/ovn/ovn-northd.pid Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ controller_pidfile=/var/run/ovn/ovn-controller.pid Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ controller_logfile=/var/log/ovn/acl-audit-log.log Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ vswitch_dbsock=/var/run/openvswitch/db.sock Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ nbdb_sock=/var/run/ovn/ovnnb_db.sock Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ sbdb_sock=/var/run/ovn/ovnsb_db.sock Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + ovndb-readiness-probe nb Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + local dbname=nb Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + [[ 1 -ne 1 ]] Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + local ctlfile Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + [[ nb = \n\b ]] Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + ctlfile=/var/run/ovn/ovnnb_db.ctl Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=3 ovsdb-server/sync-status Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: ++ grep 'state: active' Mar 19 09:18:31.869573 master-0 
kubenswrapper[7385]: ++ false Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: + status= Mar 19 09:18:31.869573 master-0 kubenswrapper[7385]: > Mar 19 09:18:31.900891 master-0 kubenswrapper[7385]: I0319 09:18:31.900743 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:31.900891 master-0 kubenswrapper[7385]: I0319 09:18:31.900812 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:31.901171 master-0 kubenswrapper[7385]: I0319 09:18:31.900943 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:31.901171 master-0 kubenswrapper[7385]: I0319 09:18:31.900964 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:31.901171 master-0 kubenswrapper[7385]: I0319 09:18:31.901043 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:31.901171 master-0 kubenswrapper[7385]: I0319 09:18:31.901106 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:31.901369 master-0 kubenswrapper[7385]: I0319 09:18:31.901286 7385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:18:31.901369 master-0 kubenswrapper[7385]: I0319 09:18:31.901299 7385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:18:31.909159 master-0 kubenswrapper[7385]: I0319 09:18:31.909086 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:32.659010 master-0 kubenswrapper[7385]: E0319 09:18:32.658946 7385 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427" Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: E0319 09:18:32.659170 7385 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: echo "Copying system trust bundle" Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: fi Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3fdcbf7be3f90bd080ffb2c75b091d7eef03681e0f90912ff6140ee48c177616,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.35_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8bm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-5885bfd7f4-z9khh_openshift-authentication-operator(fe1881fb-c670-442a-a092-c1eee6b7d5e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Mar 19 09:18:32.659205 master-0 kubenswrapper[7385]: > logger="UnhandledError" Mar 19 09:18:32.660723 master-0 kubenswrapper[7385]: E0319 09:18:32.660670 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" podUID="fe1881fb-c670-442a-a092-c1eee6b7d5e5" Mar 19 09:18:32.867314 master-0 kubenswrapper[7385]: I0319 09:18:32.867250 7385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:18:33.175398 master-0 kubenswrapper[7385]: E0319 09:18:33.175117 7385 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27" Mar 19 09:18:33.175398 master-0 kubenswrapper[7385]: E0319 09:18:33.175326 7385 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:insights-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27,Command:[],Args:[start --config=/etc/insights-operator/server.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELEASE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{56623104 0} {} 54Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:snapshots,ReadOnly:false,MountPath:/var/lib/insights-operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnxk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000260000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod insights-operator-68bf6ff9d6-h4zrl_openshift-insights(70e8c62b-97c3-4c0c-85d3-f660118831fd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 09:18:33.176787 master-0 kubenswrapper[7385]: E0319 09:18:33.176633 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" podUID="70e8c62b-97c3-4c0c-85d3-f660118831fd" Mar 19 09:18:33.665529 master-0 kubenswrapper[7385]: I0319 09:18:33.665474 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:33.669133 master-0 kubenswrapper[7385]: I0319 09:18:33.669083 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:18:33.704139 master-0 kubenswrapper[7385]: E0319 09:18:33.704093 7385 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310" Mar 19 09:18:33.704424 master-0 kubenswrapper[7385]: E0319 09:18:33.704337 7385 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cluster-storage-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310,Command:[cluster-storage-operator start],Args:[start -v=2 --terminate-on-files=/var/run/secrets/serving-cert/tls.crt 
--terminate-on-files=/var/run/secrets/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:AWS_EBS_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6f8be2ccd34c5347b290d853b5d7a8d746d13d2f5d2828da73c16a8eb6d5af67,ValueFrom:nil,},EnvVar{Name:AWS_EBS_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f9a5f406ad4ce6ecadd3e2590848bc4b5de5ab1cb5d0bb753b98188a28c44956,ValueFrom:nil,},EnvVar{Name:GCP_PD_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78861d4efdfa2f7b109402745c586e7e0be2529fa1d9a26b0ad3ddf3e8020953,ValueFrom:nil,},EnvVar{Name:GCP_PD_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f422ffe6b712af5b5ee25b68e73cda9554f145a58ac26b02b5a750f5c5dd126d,ValueFrom:nil,},EnvVar{Name:OPENSTACK_CINDER_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f0d505a150af2e1cc3a499a3ebfa2209ee91ad2dc51ae193947b6a5c594b206,ValueFrom:nil,},EnvVar{Name:OPENSTACK_CINDER_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8bd95b84c33750b6a5d68ff914c99418bcac9138d5b20a0465d95bfd6a16b86,ValueFrom:nil,},EnvVar{Name:OVIRT_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4c2f0ac2d23a17393606070219485ba5d974d45328077daba15411925771795,ValueFrom:nil,},EnvVar{Name:OVIRT_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78aa5be28be7d85f30d68230d193084a2ec6db6e8b67d91b99b9964b7832c3b5,ValueFrom:nil,},EnvVar{Name:MANILA_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f357238e2e91c79b804978401909536e3b9c657c994ab388d82ccc37406fa380,ValueFrom:nil,},EnvVar{Name:MANILA_DRIVER_IMAGE,Value:quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:b2f0bf8b84707646f962f446f9d8e27091796740abf15092d294625e6afb03c8,ValueFrom:nil,},EnvVar{Name:MANILA_NFS_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ed36d7e6dd64d1d4152dfc347cfe2c7a932541f66234887c2145fcf75ef3149,ValueFrom:nil,},EnvVar{Name:PROVISIONER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0de2b4097ae6231bf44e30b6724d34fd6ca3b075050479be4cabe5d5dc0847f5,ValueFrom:nil,},EnvVar{Name:ATTACHER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2af92f0e90f29d4329e3ad1e235e69bc2397e065bfbc6772d7e073701c7b1363,ValueFrom:nil,},EnvVar{Name:RESIZER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f42a877d794962193d792163231e076195b1451f9d09260a4c833b7d587c217,ValueFrom:nil,},EnvVar{Name:SNAPSHOTTER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a7ce31e7a0bb2c38d29ced899704249479ce280f444d211b468b907d367f4f70,ValueFrom:nil,},EnvVar{Name:NODE_DRIVER_REGISTRAR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8a31138bdae4975c69f6c9ad5fb30ce438f4bd6ae05eb8fd7db07924729855c,ValueFrom:nil,},EnvVar{Name:LIVENESS_PROBE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dca644664dcc5649c27b5b0d55102bcd46c0d25de3e63f96866b81f7cb1d90cc,ValueFrom:nil,},EnvVar{Name:VSPHERE_PROBLEM_DETECTOR_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ad103effb35bf1b15fb53ef8f3d77a563dee94e7a6703924b377b31ac7754ba2,ValueFrom:nil,},EnvVar{Name:AZURE_DISK_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe31f5b78d8ec974d4e30efa3524849ebdc534bd5e83b6b8789944322ee9b9ff,ValueFrom:nil,},EnvVar{Name:AZURE_DISK_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc78ae6ff45a27c111fff14e9d15a2e9982f97577722fe519630a018ebd64a5e,ValueFrom:nil,},EnvVar{Name:AZURE_FILE_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c7ede995e9f063c14
d14db7d70ee4ddb5e098b36033ca7479593abb1e34c1f0f,ValueFrom:nil,},EnvVar{Name:AZURE_FILE_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13a3fe1b64974d4b2ea6bebddbc974b777556820de3dbd204e8a5b634e7a76a5,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f52b7d31de7697dd95b0addb28b5a270e2e2a8e37543a16696aaadcaf7a14756,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b6029487b019751b36752e15a5afd5db73fe449798b0df7e7465fe47353b8271,ValueFrom:nil,},EnvVar{Name:VMWARE_VSPHERE_SYNCER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0509bd17634879a7e7c73a96a6cfe4be00f98e3ce7258733d0d6bb7f8a95b91f,ValueFrom:nil,},EnvVar{Name:CLUSTER_CLOUD_CONTROLLER_MANAGER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5,ValueFrom:nil,},EnvVar{Name:IBM_VPC_BLOCK_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53284e11f5db88ec68f5ac7fdd1d42b26e62fde221368f8a1b8f918ed6b38d4f,ValueFrom:nil,},EnvVar{Name:IBM_VPC_BLOCK_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101edd497d95ff956953bb01124b8f81d6d0691e2a44a76c88dd8260299ff382,ValueFrom:nil,},EnvVar{Name:POWERVS_BLOCK_CSI_DRIVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4dcc3f2648915ed6887ff9db0c8d45b5487e3acdd7eb832ff6e7d579846ed90b,ValueFrom:nil,},EnvVar{Name:POWERVS_BLOCK_CSI_DRIVER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:284d17aa10d048eb7e39956681248cc31caa37aedde5edcd72181d12f1beaa43,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6
7c988e079558dc6b20232ebf9a7f7276fee60c756caed584c9715e0bec77a5a,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cluster-storage-operator-serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8s7rj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-storage-operator-7d87854d6-cgsgk_openshift-cluster-storage-operator(525b41b5-82d8-4d47-8350-79644a2c9360): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 09:18:33.705658 master-0 kubenswrapper[7385]: E0319 09:18:33.705559 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" podUID="525b41b5-82d8-4d47-8350-79644a2c9360" Mar 19 09:18:34.182838 master-0 kubenswrapper[7385]: E0319 
09:18:34.182785 7385 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" Mar 19 09:18:34.183179 master-0 kubenswrapper[7385]: E0319 09:18:34.182937 7385 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2hfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-b865698dc-pfs65_openshift-service-ca-operator(012cdc1d-ebc8-431e-9a52-9a39de95dd0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 19 09:18:34.184383 master-0 kubenswrapper[7385]: E0319 09:18:34.184325 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" podUID="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" Mar 19 09:18:34.598225 master-0 kubenswrapper[7385]: I0319 09:18:34.597877 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-network-diagnostics/network-check-target-lql9l"] Mar 19 09:18:34.811163 master-0 kubenswrapper[7385]: I0319 09:18:34.809558 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:34.811163 master-0 kubenswrapper[7385]: I0319 09:18:34.809986 7385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:18:34.811163 master-0 kubenswrapper[7385]: I0319 09:18:34.809998 7385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:18:34.838089 master-0 kubenswrapper[7385]: I0319 09:18:34.837774 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:18:34.884579 master-0 kubenswrapper[7385]: I0319 09:18:34.884476 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" event={"ID":"70258988-8374-4aee-aaa2-be3c2e853062","Type":"ContainerStarted","Data":"48e3bb33c4cfc2acfda10baf096f5ef90778cf5f988e45ef005dd24496a67e52"} Mar 19 09:18:34.889505 master-0 kubenswrapper[7385]: I0319 09:18:34.889233 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" event={"ID":"43fca1a4-4fa7-4a43-b9c4-7f50a8737643","Type":"ContainerStarted","Data":"ea1f7d359b6ee07950af03d5716d56f99f195491d0e7434e7ef9e53aca7d8ce6"} Mar 19 09:18:34.900570 master-0 kubenswrapper[7385]: I0319 09:18:34.898245 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" event={"ID":"53bff8e4-bf60-4386-8905-49d43fd6c420","Type":"ContainerStarted","Data":"3c8b4e82c1555c09e55296bfca35644f6006a9bed8037eabe78692b05714698a"} Mar 19 09:18:34.901142 master-0 kubenswrapper[7385]: I0319 09:18:34.901102 7385 generic.go:334] "Generic (PLEG): container 
finished" podID="17e0cb4a-e776-4886-927e-ae446af7f234" containerID="2e89abc0f17fc465edcdc9ff26f6e87d57f135c537e0e0141992b6c68f2869ef" exitCode=0 Mar 19 09:18:34.901209 master-0 kubenswrapper[7385]: I0319 09:18:34.901184 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" event={"ID":"17e0cb4a-e776-4886-927e-ae446af7f234","Type":"ContainerDied","Data":"2e89abc0f17fc465edcdc9ff26f6e87d57f135c537e0e0141992b6c68f2869ef"} Mar 19 09:18:34.905082 master-0 kubenswrapper[7385]: I0319 09:18:34.905045 7385 generic.go:334] "Generic (PLEG): container finished" podID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerID="df015e37363b9eb628b8a08ca5e9d7aac56b16bc451c8914eb82e1273a54c66d" exitCode=0 Mar 19 09:18:34.905343 master-0 kubenswrapper[7385]: I0319 09:18:34.905201 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerDied","Data":"df015e37363b9eb628b8a08ca5e9d7aac56b16bc451c8914eb82e1273a54c66d"} Mar 19 09:18:34.908422 master-0 kubenswrapper[7385]: I0319 09:18:34.907301 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" event={"ID":"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4","Type":"ContainerStarted","Data":"13d37b6e0fd525b422b8c24e6c520e3e647d99050d3e3d8fce7cd4856511e27f"} Mar 19 09:18:34.916747 master-0 kubenswrapper[7385]: I0319 09:18:34.916709 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" event={"ID":"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff","Type":"ContainerStarted","Data":"edc97cab8d1c4b85265dcfce231bf29161c0caac67a28ad74d915ec1fff0a681"} Mar 19 09:18:34.921201 master-0 kubenswrapper[7385]: I0319 09:18:34.921178 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" event={"ID":"a67ae8dc-240d-4708-9139-1d49c601e552","Type":"ContainerStarted","Data":"69c48f90f075a2cd2e8836a6c9cf1524c6d05160f72475eb6e7ea35e49cf68db"} Mar 19 09:18:34.923003 master-0 kubenswrapper[7385]: I0319 09:18:34.922981 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-lql9l" event={"ID":"6cc45721-c05b-4161-91d9-d65cf6ec61d4","Type":"ContainerStarted","Data":"95ba2410ddef95b73255b626bcb3103473682323c747d1bdc3de3394ddb254e9"} Mar 19 09:18:34.923105 master-0 kubenswrapper[7385]: I0319 09:18:34.923087 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-lql9l" event={"ID":"6cc45721-c05b-4161-91d9-d65cf6ec61d4","Type":"ContainerStarted","Data":"5133c097ddac4c4eb3bf47ec178286cfda103ff21a8e794c8ccd120974cf84fe"} Mar 19 09:18:34.923579 master-0 kubenswrapper[7385]: I0319 09:18:34.923562 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:18:34.932953 master-0 kubenswrapper[7385]: I0319 09:18:34.932929 7385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:18:35.910824 master-0 kubenswrapper[7385]: I0319 09:18:35.910433 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:35.910824 master-0 kubenswrapper[7385]: I0319 09:18:35.910796 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:35.910824 master-0 kubenswrapper[7385]: I0319 09:18:35.910820 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:35.910824 master-0 kubenswrapper[7385]: I0319 09:18:35.910838 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:35.910824 master-0 kubenswrapper[7385]: I0319 09:18:35.910854 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:35.910824 master-0 kubenswrapper[7385]: I0319 09:18:35.910872 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.910894 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.910912 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.910932 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.910950 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.910967 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.910988 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.911009 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.911029 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.911053 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: 
\"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: I0319 09:18:35.911073 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911172 7385 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911214 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.911200892 +0000 UTC m=+19.585630593 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : secret "metrics-daemon-secret" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911816 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911843 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.91183597 +0000 UTC m=+19.586265671 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911878 7385 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911896 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.911890502 +0000 UTC m=+19.586320203 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911926 7385 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911940 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.911935644 +0000 UTC m=+19.586365345 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911969 7385 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.911984 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.911978995 +0000 UTC m=+19.586408696 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912012 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912029 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912023527 +0000 UTC m=+19.586453228 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912056 7385 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912071 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912066358 +0000 UTC m=+19.586496059 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912097 7385 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912112 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912107809 +0000 UTC m=+19.586537510 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912140 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912154 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.91214966 +0000 UTC m=+19.586579351 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912180 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912201 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912194091 +0000 UTC m=+19.586623792 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912238 7385 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912259 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912252893 +0000 UTC m=+19.586682594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912289 7385 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912310 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912303014 +0000 UTC m=+19.586732715 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:35.912311 master-0 kubenswrapper[7385]: E0319 09:18:35.912345 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:35.913717 master-0 kubenswrapper[7385]: E0319 09:18:35.912363 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912357766 +0000 UTC m=+19.586787457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found Mar 19 09:18:35.913717 master-0 kubenswrapper[7385]: E0319 09:18:35.912389 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:35.913717 master-0 kubenswrapper[7385]: E0319 09:18:35.912404 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912399337 +0000 UTC m=+19.586829038 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found Mar 19 09:18:35.913717 master-0 kubenswrapper[7385]: E0319 09:18:35.912430 7385 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 09:18:35.913717 master-0 kubenswrapper[7385]: E0319 09:18:35.912445 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.912440298 +0000 UTC m=+19.586869999 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found Mar 19 09:18:35.913717 master-0 kubenswrapper[7385]: E0319 09:18:35.912475 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 09:18:35.913717 master-0 kubenswrapper[7385]: E0319 09:18:35.912491 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:43.91248601 +0000 UTC m=+19.586915711 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found Mar 19 09:18:35.938605 master-0 kubenswrapper[7385]: I0319 09:18:35.938536 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" event={"ID":"ca2f7cb3-8812-4fe3-83a5-61668ef87f99","Type":"ContainerStarted","Data":"3335c7fc18f5f7e2694a86064d55e2221326f9866ff420531a852d42c29d0c0d"} Mar 19 09:18:35.946776 master-0 kubenswrapper[7385]: I0319 09:18:35.946740 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-nsnds"] Mar 19 09:18:35.946991 master-0 kubenswrapper[7385]: E0319 09:18:35.946872 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerName="prober" Mar 19 09:18:35.946991 master-0 kubenswrapper[7385]: I0319 09:18:35.946883 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerName="prober" Mar 19 09:18:35.946991 master-0 kubenswrapper[7385]: E0319 09:18:35.946893 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:18:35.946991 master-0 kubenswrapper[7385]: I0319 09:18:35.946898 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:18:35.946991 master-0 kubenswrapper[7385]: I0319 09:18:35.946956 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="307996ba-f4bd-4504-bf14-2d5a7a101016" containerName="prober" Mar 19 
09:18:35.946991 master-0 kubenswrapper[7385]: I0319 09:18:35.946963 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:18:35.947282 master-0 kubenswrapper[7385]: I0319 09:18:35.947268 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" Mar 19 09:18:35.948857 master-0 kubenswrapper[7385]: I0319 09:18:35.948826 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 09:18:35.949006 master-0 kubenswrapper[7385]: I0319 09:18:35.948980 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 09:18:35.957263 master-0 kubenswrapper[7385]: I0319 09:18:35.957227 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-nsnds"] Mar 19 09:18:36.113848 master-0 kubenswrapper[7385]: I0319 09:18:36.113795 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jns5r\" (UniqueName: \"kubernetes.io/projected/3eeb72c3-1a56-4955-845e-81607513b1b2-kube-api-access-jns5r\") pod \"migrator-8487694857-nsnds\" (UID: \"3eeb72c3-1a56-4955-845e-81607513b1b2\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" Mar 19 09:18:36.191034 master-0 kubenswrapper[7385]: I0319 09:18:36.190925 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8"] Mar 19 09:18:36.192642 master-0 kubenswrapper[7385]: I0319 09:18:36.191423 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" Mar 19 09:18:36.204567 master-0 kubenswrapper[7385]: I0319 09:18:36.203400 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8"] Mar 19 09:18:36.214956 master-0 kubenswrapper[7385]: I0319 09:18:36.214910 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jns5r\" (UniqueName: \"kubernetes.io/projected/3eeb72c3-1a56-4955-845e-81607513b1b2-kube-api-access-jns5r\") pod \"migrator-8487694857-nsnds\" (UID: \"3eeb72c3-1a56-4955-845e-81607513b1b2\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" Mar 19 09:18:36.215421 master-0 kubenswrapper[7385]: I0319 09:18:36.215384 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5hmg\" (UniqueName: \"kubernetes.io/projected/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b-kube-api-access-k5hmg\") pod \"csi-snapshot-controller-64854d9cff-blgk8\" (UID: \"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" Mar 19 09:18:36.245740 master-0 kubenswrapper[7385]: I0319 09:18:36.245692 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jns5r\" (UniqueName: \"kubernetes.io/projected/3eeb72c3-1a56-4955-845e-81607513b1b2-kube-api-access-jns5r\") pod \"migrator-8487694857-nsnds\" (UID: \"3eeb72c3-1a56-4955-845e-81607513b1b2\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" Mar 19 09:18:36.294569 master-0 kubenswrapper[7385]: I0319 09:18:36.284491 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds"
Mar 19 09:18:36.316560 master-0 kubenswrapper[7385]: I0319 09:18:36.316030 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hmg\" (UniqueName: \"kubernetes.io/projected/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b-kube-api-access-k5hmg\") pod \"csi-snapshot-controller-64854d9cff-blgk8\" (UID: \"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8"
Mar 19 09:18:36.334936 master-0 kubenswrapper[7385]: I0319 09:18:36.333853 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hmg\" (UniqueName: \"kubernetes.io/projected/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b-kube-api-access-k5hmg\") pod \"csi-snapshot-controller-64854d9cff-blgk8\" (UID: \"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8"
Mar 19 09:18:36.443308 master-0 kubenswrapper[7385]: I0319 09:18:36.443147 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-nsnds"]
Mar 19 09:18:36.522596 master-0 kubenswrapper[7385]: I0319 09:18:36.519775 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8"
Mar 19 09:18:36.525939 master-0 kubenswrapper[7385]: I0319 09:18:36.525828 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-wh8b2"]
Mar 19 09:18:36.526506 master-0 kubenswrapper[7385]: I0319 09:18:36.526399 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.529901 master-0 kubenswrapper[7385]: I0319 09:18:36.528504 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 19 09:18:36.529901 master-0 kubenswrapper[7385]: I0319 09:18:36.528535 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 19 09:18:36.529901 master-0 kubenswrapper[7385]: I0319 09:18:36.528648 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 19 09:18:36.529901 master-0 kubenswrapper[7385]: I0319 09:18:36.528778 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 19 09:18:36.529901 master-0 kubenswrapper[7385]: I0319 09:18:36.528866 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 19 09:18:36.529901 master-0 kubenswrapper[7385]: I0319 09:18:36.528948 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 19 09:18:36.540641 master-0 kubenswrapper[7385]: I0319 09:18:36.539595 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-wh8b2"]
Mar 19 09:18:36.691565 master-0 kubenswrapper[7385]: I0319 09:18:36.691494 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8"]
Mar 19 09:18:36.696242 master-0 kubenswrapper[7385]: W0319 09:18:36.696149 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde72ea6c_f3ce_41a5_9a43_9db4f27ed84b.slice/crio-33ca2f2b19a1770d26eec6f100c1e6f12e2c50ac6dbb0f1fd1d1831103d4af22 WatchSource:0}: Error finding container 33ca2f2b19a1770d26eec6f100c1e6f12e2c50ac6dbb0f1fd1d1831103d4af22: Status 404 returned error can't find the container with id 33ca2f2b19a1770d26eec6f100c1e6f12e2c50ac6dbb0f1fd1d1831103d4af22
Mar 19 09:18:36.722259 master-0 kubenswrapper[7385]: I0319 09:18:36.722203 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.722459 master-0 kubenswrapper[7385]: I0319 09:18:36.722374 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.722495 master-0 kubenswrapper[7385]: I0319 09:18:36.722466 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6554\" (UniqueName: \"kubernetes.io/projected/787209c8-18f5-4312-916b-1de630270c4a-kube-api-access-d6554\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.722495 master-0 kubenswrapper[7385]: I0319 09:18:36.722485 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.722577 master-0 kubenswrapper[7385]: I0319 09:18:36.722509 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.823125 master-0 kubenswrapper[7385]: I0319 09:18:36.823072 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6554\" (UniqueName: \"kubernetes.io/projected/787209c8-18f5-4312-916b-1de630270c4a-kube-api-access-d6554\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.823125 master-0 kubenswrapper[7385]: I0319 09:18:36.823122 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.823125 master-0 kubenswrapper[7385]: I0319 09:18:36.823144 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.823370 master-0 kubenswrapper[7385]: I0319 09:18:36.823171 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.823370 master-0 kubenswrapper[7385]: E0319 09:18:36.823298 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 19 09:18:36.823370 master-0 kubenswrapper[7385]: E0319 09:18:36.823336 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:36.823452 master-0 kubenswrapper[7385]: E0319 09:18:36.823427 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 19 09:18:36.823452 master-0 kubenswrapper[7385]: E0319 09:18:36.823352 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:37.323338533 +0000 UTC m=+12.997768234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : configmap "openshift-global-ca" not found
Mar 19 09:18:36.823515 master-0 kubenswrapper[7385]: E0319 09:18:36.823474 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:37.323454286 +0000 UTC m=+12.997883997 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : configmap "client-ca" not found
Mar 19 09:18:36.823515 master-0 kubenswrapper[7385]: E0319 09:18:36.823498 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:37.323491087 +0000 UTC m=+12.997920798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : configmap "config" not found
Mar 19 09:18:36.823599 master-0 kubenswrapper[7385]: I0319 09:18:36.823522 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.823732 master-0 kubenswrapper[7385]: E0319 09:18:36.823705 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:36.823799 master-0 kubenswrapper[7385]: E0319 09:18:36.823764 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:37.323754134 +0000 UTC m=+12.998183845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : secret "serving-cert" not found
Mar 19 09:18:36.845299 master-0 kubenswrapper[7385]: I0319 09:18:36.845265 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6554\" (UniqueName: \"kubernetes.io/projected/787209c8-18f5-4312-916b-1de630270c4a-kube-api-access-d6554\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:36.943520 master-0 kubenswrapper[7385]: I0319 09:18:36.943414 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" event={"ID":"3eeb72c3-1a56-4955-845e-81607513b1b2","Type":"ContainerStarted","Data":"90b6bf31b6285b89ba457dc317b7de2db8799afd4d2c378edeab172c14801f77"}
Mar 19 09:18:36.944271 master-0 kubenswrapper[7385]: I0319 09:18:36.944254 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerStarted","Data":"33ca2f2b19a1770d26eec6f100c1e6f12e2c50ac6dbb0f1fd1d1831103d4af22"}
Mar 19 09:18:37.328032 master-0 kubenswrapper[7385]: I0319 09:18:37.327953 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:37.328032 master-0 kubenswrapper[7385]: I0319 09:18:37.328024 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:37.328276 master-0 kubenswrapper[7385]: E0319 09:18:37.328079 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 19 09:18:37.328276 master-0 kubenswrapper[7385]: E0319 09:18:37.328079 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:37.328276 master-0 kubenswrapper[7385]: E0319 09:18:37.328122 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:38.328109074 +0000 UTC m=+14.002538775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : configmap "openshift-global-ca" not found
Mar 19 09:18:37.328276 master-0 kubenswrapper[7385]: E0319 09:18:37.328135 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:38.328130484 +0000 UTC m=+14.002560185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : configmap "client-ca" not found
Mar 19 09:18:37.328276 master-0 kubenswrapper[7385]: I0319 09:18:37.328161 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:37.328276 master-0 kubenswrapper[7385]: I0319 09:18:37.328232 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:37.328450 master-0 kubenswrapper[7385]: E0319 09:18:37.328351 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:37.328450 master-0 kubenswrapper[7385]: E0319 09:18:37.328372 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:38.328365851 +0000 UTC m=+14.002795552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : secret "serving-cert" not found
Mar 19 09:18:37.328450 master-0 kubenswrapper[7385]: E0319 09:18:37.328392 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 19 09:18:37.328450 master-0 kubenswrapper[7385]: E0319 09:18:37.328407 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:38.328402752 +0000 UTC m=+14.002832453 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : configmap "config" not found
Mar 19 09:18:37.478432 master-0 kubenswrapper[7385]: I0319 09:18:37.477572 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-wh8b2"]
Mar 19 09:18:37.478432 master-0 kubenswrapper[7385]: E0319 09:18:37.477773 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2" podUID="787209c8-18f5-4312-916b-1de630270c4a"
Mar 19 09:18:37.495266 master-0 kubenswrapper[7385]: I0319 09:18:37.495209 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"]
Mar 19 09:18:37.495973 master-0 kubenswrapper[7385]: I0319 09:18:37.495939 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.498048 master-0 kubenswrapper[7385]: I0319 09:18:37.497910 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 19 09:18:37.498133 master-0 kubenswrapper[7385]: I0319 09:18:37.498072 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 19 09:18:37.498380 master-0 kubenswrapper[7385]: I0319 09:18:37.498351 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 19 09:18:37.498530 master-0 kubenswrapper[7385]: I0319 09:18:37.498508 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 19 09:18:37.499228 master-0 kubenswrapper[7385]: I0319 09:18:37.499203 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 19 09:18:37.506829 master-0 kubenswrapper[7385]: I0319 09:18:37.506776 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"]
Mar 19 09:18:37.535618 master-0 kubenswrapper[7385]: I0319 09:18:37.535529 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-config\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.535719 master-0 kubenswrapper[7385]: I0319 09:18:37.535691 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.535935 master-0 kubenswrapper[7385]: I0319 09:18:37.535909 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.535991 master-0 kubenswrapper[7385]: I0319 09:18:37.535945 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc924\" (UniqueName: \"kubernetes.io/projected/56aed905-9823-4165-9bcd-c4d7ce7bed90-kube-api-access-rc924\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.637279 master-0 kubenswrapper[7385]: I0319 09:18:37.637194 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-config\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.637642 master-0 kubenswrapper[7385]: I0319 09:18:37.637535 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.637954 master-0 kubenswrapper[7385]: E0319 09:18:37.637921 7385 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:37.638025 master-0 kubenswrapper[7385]: E0319 09:18:37.637998 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:38.137977798 +0000 UTC m=+13.812407499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : secret "serving-cert" not found
Mar 19 09:18:37.638121 master-0 kubenswrapper[7385]: I0319 09:18:37.638081 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.638182 master-0 kubenswrapper[7385]: I0319 09:18:37.638124 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc924\" (UniqueName: \"kubernetes.io/projected/56aed905-9823-4165-9bcd-c4d7ce7bed90-kube-api-access-rc924\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.638296 master-0 kubenswrapper[7385]: I0319 09:18:37.638259 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-config\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.638497 master-0 kubenswrapper[7385]: E0319 09:18:37.638457 7385 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:37.638582 master-0 kubenswrapper[7385]: E0319 09:18:37.638564 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:38.138520854 +0000 UTC m=+13.812950645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : configmap "client-ca" not found
Mar 19 09:18:37.663130 master-0 kubenswrapper[7385]: I0319 09:18:37.663077 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc924\" (UniqueName: \"kubernetes.io/projected/56aed905-9823-4165-9bcd-c4d7ce7bed90-kube-api-access-rc924\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:37.947090 master-0 kubenswrapper[7385]: I0319 09:18:37.947000 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:37.962806 master-0 kubenswrapper[7385]: I0319 09:18:37.962504 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:38.044385 master-0 kubenswrapper[7385]: I0319 09:18:38.044063 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6554\" (UniqueName: \"kubernetes.io/projected/787209c8-18f5-4312-916b-1de630270c4a-kube-api-access-d6554\") pod \"787209c8-18f5-4312-916b-1de630270c4a\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") "
Mar 19 09:18:38.050137 master-0 kubenswrapper[7385]: I0319 09:18:38.048065 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787209c8-18f5-4312-916b-1de630270c4a-kube-api-access-d6554" (OuterVolumeSpecName: "kube-api-access-d6554") pod "787209c8-18f5-4312-916b-1de630270c4a" (UID: "787209c8-18f5-4312-916b-1de630270c4a"). InnerVolumeSpecName "kube-api-access-d6554". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:18:38.145752 master-0 kubenswrapper[7385]: I0319 09:18:38.145599 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:38.145911 master-0 kubenswrapper[7385]: E0319 09:18:38.145810 7385 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:38.145911 master-0 kubenswrapper[7385]: E0319 09:18:38.145903 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:39.145874319 +0000 UTC m=+14.820304090 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : secret "serving-cert" not found
Mar 19 09:18:38.146123 master-0 kubenswrapper[7385]: I0319 09:18:38.146074 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:38.146245 master-0 kubenswrapper[7385]: I0319 09:18:38.146227 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6554\" (UniqueName: \"kubernetes.io/projected/787209c8-18f5-4312-916b-1de630270c4a-kube-api-access-d6554\") on node \"master-0\" DevicePath \"\""
Mar 19 09:18:38.146279 master-0 kubenswrapper[7385]: E0319 09:18:38.146243 7385 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:38.146338 master-0 kubenswrapper[7385]: E0319 09:18:38.146320 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:39.146300451 +0000 UTC m=+14.820730212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : configmap "client-ca" not found
Mar 19 09:18:38.360612 master-0 kubenswrapper[7385]: I0319 09:18:38.359961 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:38.360612 master-0 kubenswrapper[7385]: I0319 09:18:38.360319 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:38.360612 master-0 kubenswrapper[7385]: I0319 09:18:38.360371 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:38.360612 master-0 kubenswrapper[7385]: E0319 09:18:38.360386 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:38.362078 master-0 kubenswrapper[7385]: I0319 09:18:38.360947 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:38.362078 master-0 kubenswrapper[7385]: I0319 09:18:38.361929 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:38.362078 master-0 kubenswrapper[7385]: E0319 09:18:38.361991 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:40.361964426 +0000 UTC m=+16.036394127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : configmap "client-ca" not found
Mar 19 09:18:38.362668 master-0 kubenswrapper[7385]: E0319 09:18:38.362417 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:38.362668 master-0 kubenswrapper[7385]: E0319 09:18:38.362473 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert podName:787209c8-18f5-4312-916b-1de630270c4a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:40.36246104 +0000 UTC m=+16.036890741 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert") pod "controller-manager-f5df8899c-wh8b2" (UID: "787209c8-18f5-4312-916b-1de630270c4a") : secret "serving-cert" not found
Mar 19 09:18:38.367755 master-0 kubenswrapper[7385]: I0319 09:18:38.367640 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-wh8b2\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") " pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:38.463884 master-0 kubenswrapper[7385]: I0319 09:18:38.463028 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles\") pod \"787209c8-18f5-4312-916b-1de630270c4a\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") "
Mar 19 09:18:38.463884 master-0 kubenswrapper[7385]: I0319 09:18:38.463135 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config\") pod \"787209c8-18f5-4312-916b-1de630270c4a\" (UID: \"787209c8-18f5-4312-916b-1de630270c4a\") "
Mar 19 09:18:38.464370 master-0 kubenswrapper[7385]: I0319 09:18:38.464207 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "787209c8-18f5-4312-916b-1de630270c4a" (UID: "787209c8-18f5-4312-916b-1de630270c4a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:18:38.464370 master-0 kubenswrapper[7385]: I0319 09:18:38.464344 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config" (OuterVolumeSpecName: "config") pod "787209c8-18f5-4312-916b-1de630270c4a" (UID: "787209c8-18f5-4312-916b-1de630270c4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:18:38.565458 master-0 kubenswrapper[7385]: I0319 09:18:38.564120 7385 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 19 09:18:38.565458 master-0 kubenswrapper[7385]: I0319 09:18:38.564158 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:18:38.956918 master-0 kubenswrapper[7385]: I0319 09:18:38.956846 7385 generic.go:334] "Generic (PLEG): container finished" podID="17e0cb4a-e776-4886-927e-ae446af7f234" containerID="a77554a501a64db0cbf8b7e5fc03fd9507d3d6aa78d1ae228437911712e2adbe" exitCode=0
Mar 19 09:18:38.958431 master-0 kubenswrapper[7385]: I0319 09:18:38.956911 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" event={"ID":"17e0cb4a-e776-4886-927e-ae446af7f234","Type":"ContainerDied","Data":"a77554a501a64db0cbf8b7e5fc03fd9507d3d6aa78d1ae228437911712e2adbe"}
Mar 19 09:18:38.962130 master-0 kubenswrapper[7385]: I0319 09:18:38.961806 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerStarted","Data":"8c68ece13612c392b8986c6036f0fb5686c420aa3d85d8318f1363a956c12d2e"}
Mar 19 09:18:38.962130 master-0 kubenswrapper[7385]: I0319 09:18:38.961858 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-wh8b2"
Mar 19 09:18:39.031150 master-0 kubenswrapper[7385]: I0319 09:18:39.029162 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66f78ff458-g52z5"]
Mar 19 09:18:39.031150 master-0 kubenswrapper[7385]: I0319 09:18:39.030106 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5"
Mar 19 09:18:39.033646 master-0 kubenswrapper[7385]: I0319 09:18:39.031807 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-wh8b2"]
Mar 19 09:18:39.033646 master-0 kubenswrapper[7385]: I0319 09:18:39.032229 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 19 09:18:39.033646 master-0 kubenswrapper[7385]: I0319 09:18:39.032906 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-wh8b2"]
Mar 19 09:18:39.033646 master-0 kubenswrapper[7385]: I0319 09:18:39.032998 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 19 09:18:39.033646 master-0 kubenswrapper[7385]: I0319 09:18:39.033058 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 19 09:18:39.039709 master-0 kubenswrapper[7385]: I0319 09:18:39.034863 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 19 09:18:39.039709 master-0
kubenswrapper[7385]: I0319 09:18:39.035008 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 09:18:39.044368 master-0 kubenswrapper[7385]: I0319 09:18:39.043709 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 09:18:39.044368 master-0 kubenswrapper[7385]: I0319 09:18:39.043833 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66f78ff458-g52z5"] Mar 19 09:18:39.069767 master-0 kubenswrapper[7385]: I0319 09:18:39.069717 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-config\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.069964 master-0 kubenswrapper[7385]: I0319 09:18:39.069802 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.069964 master-0 kubenswrapper[7385]: I0319 09:18:39.069897 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-proxy-ca-bundles\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.069964 master-0 kubenswrapper[7385]: I0319 09:18:39.069957 7385 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.070100 master-0 kubenswrapper[7385]: I0319 09:18:39.069979 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9hk\" (UniqueName: \"kubernetes.io/projected/91beea57-b4b8-41cc-a969-addcf201e56f-kube-api-access-5q9hk\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.070100 master-0 kubenswrapper[7385]: I0319 09:18:39.070037 7385 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/787209c8-18f5-4312-916b-1de630270c4a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:39.070100 master-0 kubenswrapper[7385]: I0319 09:18:39.070053 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/787209c8-18f5-4312-916b-1de630270c4a-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:39.092706 master-0 kubenswrapper[7385]: I0319 09:18:39.092645 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66f78ff458-g52z5"] Mar 19 09:18:39.093003 master-0 kubenswrapper[7385]: E0319 09:18:39.092969 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-5q9hk proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" 
podUID="91beea57-b4b8-41cc-a969-addcf201e56f" Mar 19 09:18:39.171111 master-0 kubenswrapper[7385]: I0319 09:18:39.171008 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.171111 master-0 kubenswrapper[7385]: I0319 09:18:39.171038 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r" Mar 19 09:18:39.171216 master-0 kubenswrapper[7385]: I0319 09:18:39.171121 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-proxy-ca-bundles\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.171216 master-0 kubenswrapper[7385]: I0319 09:18:39.171164 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.171216 master-0 kubenswrapper[7385]: I0319 09:18:39.171181 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9hk\" (UniqueName: 
\"kubernetes.io/projected/91beea57-b4b8-41cc-a969-addcf201e56f-kube-api-access-5q9hk\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.171216 master-0 kubenswrapper[7385]: I0319 09:18:39.171203 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r" Mar 19 09:18:39.171323 master-0 kubenswrapper[7385]: I0319 09:18:39.171230 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-config\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.172076 master-0 kubenswrapper[7385]: I0319 09:18:39.172050 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-config\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.172131 master-0 kubenswrapper[7385]: E0319 09:18:39.172106 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:39.172366 master-0 kubenswrapper[7385]: E0319 09:18:39.172346 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca podName:91beea57-b4b8-41cc-a969-addcf201e56f nodeName:}" failed. 
No retries permitted until 2026-03-19 09:18:39.672133392 +0000 UTC m=+15.346563093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca") pod "controller-manager-66f78ff458-g52z5" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f") : configmap "client-ca" not found Mar 19 09:18:39.172630 master-0 kubenswrapper[7385]: E0319 09:18:39.172608 7385 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:39.172903 master-0 kubenswrapper[7385]: E0319 09:18:39.172639 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:41.172631796 +0000 UTC m=+16.847061497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : secret "serving-cert" not found Mar 19 09:18:39.172903 master-0 kubenswrapper[7385]: E0319 09:18:39.172782 7385 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:39.172903 master-0 kubenswrapper[7385]: E0319 09:18:39.172826 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:41.172813232 +0000 UTC m=+16.847242933 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : configmap "client-ca" not found Mar 19 09:18:39.172903 master-0 kubenswrapper[7385]: E0319 09:18:39.172842 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:39.172903 master-0 kubenswrapper[7385]: E0319 09:18:39.172885 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert podName:91beea57-b4b8-41cc-a969-addcf201e56f nodeName:}" failed. No retries permitted until 2026-03-19 09:18:39.672872994 +0000 UTC m=+15.347302695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert") pod "controller-manager-66f78ff458-g52z5" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f") : secret "serving-cert" not found Mar 19 09:18:39.174053 master-0 kubenswrapper[7385]: I0319 09:18:39.173995 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-proxy-ca-bundles\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.189278 master-0 kubenswrapper[7385]: I0319 09:18:39.189206 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9hk\" (UniqueName: \"kubernetes.io/projected/91beea57-b4b8-41cc-a969-addcf201e56f-kube-api-access-5q9hk\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 
09:18:39.538560 master-0 kubenswrapper[7385]: I0319 09:18:39.538495 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:39.681343 master-0 kubenswrapper[7385]: I0319 09:18:39.681299 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.681558 master-0 kubenswrapper[7385]: I0319 09:18:39.681518 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.681558 master-0 kubenswrapper[7385]: E0319 09:18:39.681494 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:39.681647 master-0 kubenswrapper[7385]: E0319 09:18:39.681604 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca podName:91beea57-b4b8-41cc-a969-addcf201e56f nodeName:}" failed. No retries permitted until 2026-03-19 09:18:40.681590748 +0000 UTC m=+16.356020449 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca") pod "controller-manager-66f78ff458-g52z5" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f") : configmap "client-ca" not found Mar 19 09:18:39.681647 master-0 kubenswrapper[7385]: E0319 09:18:39.681618 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:39.681807 master-0 kubenswrapper[7385]: E0319 09:18:39.681776 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert podName:91beea57-b4b8-41cc-a969-addcf201e56f nodeName:}" failed. No retries permitted until 2026-03-19 09:18:40.681757173 +0000 UTC m=+16.356186904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert") pod "controller-manager-66f78ff458-g52z5" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f") : secret "serving-cert" not found Mar 19 09:18:39.969613 master-0 kubenswrapper[7385]: I0319 09:18:39.969096 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" event={"ID":"3eeb72c3-1a56-4955-845e-81607513b1b2","Type":"ContainerStarted","Data":"556b8199d58a6fd25827f76ad6a21a27661075b47dfc63370b879a39a427af4e"} Mar 19 09:18:39.970758 master-0 kubenswrapper[7385]: I0319 09:18:39.970726 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerStarted","Data":"8140af4cb4bb09d2ed5ad0f6ec653bbb3dc06a4515b9db389545823579fd212a"} Mar 19 09:18:39.970832 master-0 kubenswrapper[7385]: I0319 09:18:39.970800 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.981790 master-0 kubenswrapper[7385]: I0319 09:18:39.981764 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:39.985346 master-0 kubenswrapper[7385]: I0319 09:18:39.985324 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-config\") pod \"91beea57-b4b8-41cc-a969-addcf201e56f\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " Mar 19 09:18:39.985471 master-0 kubenswrapper[7385]: I0319 09:18:39.985459 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-proxy-ca-bundles\") pod \"91beea57-b4b8-41cc-a969-addcf201e56f\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " Mar 19 09:18:39.985747 master-0 kubenswrapper[7385]: I0319 09:18:39.985593 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q9hk\" (UniqueName: \"kubernetes.io/projected/91beea57-b4b8-41cc-a969-addcf201e56f-kube-api-access-5q9hk\") pod \"91beea57-b4b8-41cc-a969-addcf201e56f\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " Mar 19 09:18:39.985961 master-0 kubenswrapper[7385]: I0319 09:18:39.985936 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-config" (OuterVolumeSpecName: "config") pod "91beea57-b4b8-41cc-a969-addcf201e56f" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:39.986089 master-0 kubenswrapper[7385]: I0319 09:18:39.986054 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "91beea57-b4b8-41cc-a969-addcf201e56f" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:39.986461 master-0 kubenswrapper[7385]: I0319 09:18:39.986438 7385 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:39.986527 master-0 kubenswrapper[7385]: I0319 09:18:39.986466 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:39.991598 master-0 kubenswrapper[7385]: I0319 09:18:39.989967 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91beea57-b4b8-41cc-a969-addcf201e56f-kube-api-access-5q9hk" (OuterVolumeSpecName: "kube-api-access-5q9hk") pod "91beea57-b4b8-41cc-a969-addcf201e56f" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f"). InnerVolumeSpecName "kube-api-access-5q9hk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:18:39.993912 master-0 kubenswrapper[7385]: I0319 09:18:39.993839 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" podStartSLOduration=0.937406855 podStartE2EDuration="3.993820941s" podCreationTimestamp="2026-03-19 09:18:36 +0000 UTC" firstStartedPulling="2026-03-19 09:18:36.698532972 +0000 UTC m=+12.372962673" lastFinishedPulling="2026-03-19 09:18:39.754947058 +0000 UTC m=+15.429376759" observedRunningTime="2026-03-19 09:18:39.992970736 +0000 UTC m=+15.667400467" watchObservedRunningTime="2026-03-19 09:18:39.993820941 +0000 UTC m=+15.668250662" Mar 19 09:18:40.087895 master-0 kubenswrapper[7385]: I0319 09:18:40.087789 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q9hk\" (UniqueName: \"kubernetes.io/projected/91beea57-b4b8-41cc-a969-addcf201e56f-kube-api-access-5q9hk\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:40.534660 master-0 kubenswrapper[7385]: I0319 09:18:40.534618 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787209c8-18f5-4312-916b-1de630270c4a" path="/var/lib/kubelet/pods/787209c8-18f5-4312-916b-1de630270c4a/volumes" Mar 19 09:18:40.697029 master-0 kubenswrapper[7385]: I0319 09:18:40.696955 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:40.697303 master-0 kubenswrapper[7385]: E0319 09:18:40.697243 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:40.697373 master-0 kubenswrapper[7385]: E0319 09:18:40.697313 7385 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert podName:91beea57-b4b8-41cc-a969-addcf201e56f nodeName:}" failed. No retries permitted until 2026-03-19 09:18:42.697295468 +0000 UTC m=+18.371725169 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert") pod "controller-manager-66f78ff458-g52z5" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f") : secret "serving-cert" not found Mar 19 09:18:40.699014 master-0 kubenswrapper[7385]: I0319 09:18:40.698952 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca\") pod \"controller-manager-66f78ff458-g52z5\" (UID: \"91beea57-b4b8-41cc-a969-addcf201e56f\") " pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:40.699461 master-0 kubenswrapper[7385]: E0319 09:18:40.699423 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:40.699577 master-0 kubenswrapper[7385]: E0319 09:18:40.699518 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca podName:91beea57-b4b8-41cc-a969-addcf201e56f nodeName:}" failed. No retries permitted until 2026-03-19 09:18:42.699490111 +0000 UTC m=+18.373919852 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca") pod "controller-manager-66f78ff458-g52z5" (UID: "91beea57-b4b8-41cc-a969-addcf201e56f") : configmap "client-ca" not found Mar 19 09:18:40.975665 master-0 kubenswrapper[7385]: I0319 09:18:40.975620 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" event={"ID":"3eeb72c3-1a56-4955-845e-81607513b1b2","Type":"ContainerStarted","Data":"c3ab9c2defe5d9f5455454d21b78a9265b39d85c50475f3ed72987a6f0c3c408"} Mar 19 09:18:40.979179 master-0 kubenswrapper[7385]: I0319 09:18:40.979155 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66f78ff458-g52z5" Mar 19 09:18:40.979737 master-0 kubenswrapper[7385]: I0319 09:18:40.979631 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-p9bbz" event={"ID":"672ad0aa-a0c5-4640-840d-3ffa02c55d62","Type":"ContainerStarted","Data":"5469678a1ed4b4a74981e6bbcbcbcf1d7ef2cb9aa55cac60e5c57582f5bdff70"} Mar 19 09:18:40.992735 master-0 kubenswrapper[7385]: I0319 09:18:40.992570 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" podStartSLOduration=2.693788293 podStartE2EDuration="5.992524891s" podCreationTimestamp="2026-03-19 09:18:35 +0000 UTC" firstStartedPulling="2026-03-19 09:18:36.463299055 +0000 UTC m=+12.137728756" lastFinishedPulling="2026-03-19 09:18:39.762035653 +0000 UTC m=+15.436465354" observedRunningTime="2026-03-19 09:18:40.992346246 +0000 UTC m=+16.666775987" watchObservedRunningTime="2026-03-19 09:18:40.992524891 +0000 UTC m=+16.666954592" Mar 19 09:18:41.054145 master-0 kubenswrapper[7385]: I0319 09:18:41.053595 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-66f78ff458-g52z5"] Mar 19 09:18:41.054817 master-0 kubenswrapper[7385]: I0319 09:18:41.054400 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"] Mar 19 09:18:41.055164 master-0 kubenswrapper[7385]: I0319 09:18:41.055145 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.055596 master-0 kubenswrapper[7385]: I0319 09:18:41.055557 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66f78ff458-g52z5"] Mar 19 09:18:41.064692 master-0 kubenswrapper[7385]: I0319 09:18:41.058134 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 09:18:41.064692 master-0 kubenswrapper[7385]: I0319 09:18:41.059656 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 09:18:41.064692 master-0 kubenswrapper[7385]: I0319 09:18:41.059749 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 09:18:41.064692 master-0 kubenswrapper[7385]: I0319 09:18:41.059949 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 09:18:41.064692 master-0 kubenswrapper[7385]: I0319 09:18:41.060907 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"] Mar 19 09:18:41.064692 master-0 kubenswrapper[7385]: I0319 09:18:41.060976 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 09:18:41.067038 master-0 kubenswrapper[7385]: I0319 09:18:41.067004 7385 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 09:18:41.104479 master-0 kubenswrapper[7385]: I0319 09:18:41.104431 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-proxy-ca-bundles\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.104711 master-0 kubenswrapper[7385]: I0319 09:18:41.104525 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsmb\" (UniqueName: \"kubernetes.io/projected/a3aa997e-848b-4c05-8fad-cb9b3d832a59-kube-api-access-zvsmb\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.104711 master-0 kubenswrapper[7385]: I0319 09:18:41.104685 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.104800 master-0 kubenswrapper[7385]: I0319 09:18:41.104713 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.104800 master-0 kubenswrapper[7385]: I0319 09:18:41.104735 7385 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-config\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.104862 master-0 kubenswrapper[7385]: I0319 09:18:41.104807 7385 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/91beea57-b4b8-41cc-a969-addcf201e56f-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:41.104862 master-0 kubenswrapper[7385]: I0319 09:18:41.104825 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91beea57-b4b8-41cc-a969-addcf201e56f-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:41.205325 master-0 kubenswrapper[7385]: I0319 09:18:41.205275 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r" Mar 19 09:18:41.205592 master-0 kubenswrapper[7385]: I0319 09:18:41.205347 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.205592 master-0 kubenswrapper[7385]: E0319 09:18:41.205493 7385 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:41.205592 master-0 
kubenswrapper[7385]: I0319 09:18:41.205515 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.205719 master-0 kubenswrapper[7385]: E0319 09:18:41.205600 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:45.20557786 +0000 UTC m=+20.880007611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : configmap "client-ca" not found Mar 19 09:18:41.205719 master-0 kubenswrapper[7385]: E0319 09:18:41.205617 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:41.205719 master-0 kubenswrapper[7385]: I0319 09:18:41.205624 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-config\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.205719 master-0 kubenswrapper[7385]: E0319 09:18:41.205676 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. 
No retries permitted until 2026-03-19 09:18:41.705657852 +0000 UTC m=+17.380087633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : secret "serving-cert" not found Mar 19 09:18:41.205888 master-0 kubenswrapper[7385]: E0319 09:18:41.205783 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:41.205888 master-0 kubenswrapper[7385]: E0319 09:18:41.205883 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:41.705875358 +0000 UTC m=+17.380305059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : configmap "client-ca" not found Mar 19 09:18:41.206184 master-0 kubenswrapper[7385]: I0319 09:18:41.206146 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-proxy-ca-bundles\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.206317 master-0 kubenswrapper[7385]: I0319 09:18:41.206280 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: 
\"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r" Mar 19 09:18:41.206397 master-0 kubenswrapper[7385]: I0319 09:18:41.206358 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsmb\" (UniqueName: \"kubernetes.io/projected/a3aa997e-848b-4c05-8fad-cb9b3d832a59-kube-api-access-zvsmb\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.207013 master-0 kubenswrapper[7385]: E0319 09:18:41.206804 7385 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:41.207013 master-0 kubenswrapper[7385]: E0319 09:18:41.206858 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:45.206837647 +0000 UTC m=+20.881267348 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : secret "serving-cert" not found Mar 19 09:18:41.208759 master-0 kubenswrapper[7385]: I0319 09:18:41.208695 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-config\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.212467 master-0 kubenswrapper[7385]: I0319 09:18:41.212420 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-proxy-ca-bundles\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.223679 master-0 kubenswrapper[7385]: I0319 09:18:41.223643 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsmb\" (UniqueName: \"kubernetes.io/projected/a3aa997e-848b-4c05-8fad-cb9b3d832a59-kube-api-access-zvsmb\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.712475 master-0 kubenswrapper[7385]: I0319 09:18:41.712353 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 
09:18:41.712475 master-0 kubenswrapper[7385]: I0319 09:18:41.712420 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:41.712728 master-0 kubenswrapper[7385]: E0319 09:18:41.712531 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:41.712728 master-0 kubenswrapper[7385]: E0319 09:18:41.712619 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:42.712598706 +0000 UTC m=+18.387028417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : secret "serving-cert" not found Mar 19 09:18:41.713649 master-0 kubenswrapper[7385]: E0319 09:18:41.712970 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:41.713649 master-0 kubenswrapper[7385]: E0319 09:18:41.713093 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:42.713068779 +0000 UTC m=+18.387498560 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : configmap "client-ca" not found Mar 19 09:18:41.985143 master-0 kubenswrapper[7385]: I0319 09:18:41.985081 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" event={"ID":"17e0cb4a-e776-4886-927e-ae446af7f234","Type":"ContainerStarted","Data":"c30f2036341c158a4a311a14ce582436d41a1a42842791b6c421ca4a779f1492"} Mar 19 09:18:42.534977 master-0 kubenswrapper[7385]: I0319 09:18:42.534923 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91beea57-b4b8-41cc-a969-addcf201e56f" path="/var/lib/kubelet/pods/91beea57-b4b8-41cc-a969-addcf201e56f/volumes" Mar 19 09:18:42.545150 master-0 kubenswrapper[7385]: I0319 09:18:42.545107 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:18:42.735167 master-0 kubenswrapper[7385]: I0319 09:18:42.734835 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:42.735373 master-0 kubenswrapper[7385]: I0319 09:18:42.735177 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" Mar 19 09:18:42.735373 master-0 
kubenswrapper[7385]: E0319 09:18:42.735047 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:42.735373 master-0 kubenswrapper[7385]: E0319 09:18:42.735353 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:44.735333688 +0000 UTC m=+20.409763399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : secret "serving-cert" not found Mar 19 09:18:42.735488 master-0 kubenswrapper[7385]: E0319 09:18:42.735426 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:42.735519 master-0 kubenswrapper[7385]: E0319 09:18:42.735507 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:44.735488162 +0000 UTC m=+20.409917853 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : configmap "client-ca" not found Mar 19 09:18:43.950331 master-0 kubenswrapper[7385]: I0319 09:18:43.950264 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:43.950331 master-0 kubenswrapper[7385]: I0319 09:18:43.950332 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950381 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950413 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: 
\"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950440 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950475 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950511 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950534 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950582 7385 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950609 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950631 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950653 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950682 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" 
(UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950706 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.950755 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.950832 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.950813105 +0000 UTC m=+35.625242796 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: I0319 09:18:43.950862 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.950916 7385 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.950936 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.950930538 +0000 UTC m=+35.625360239 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.950971 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.950989 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.950983 +0000 UTC m=+35.625412701 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "node-tuning-operator-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.950985 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.951019 7385 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.951025 7385 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.951059 7385 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret 
"metrics-daemon-secret" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.951036 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls podName:45523224-f530-4354-90de-7fd65a1a3911 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951031751 +0000 UTC m=+35.625461452 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls") pod "dns-operator-9c5679d8f-k89rz" (UID: "45523224-f530-4354-90de-7fd65a1a3911") : secret "metrics-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.951099 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:43.951081 master-0 kubenswrapper[7385]: E0319 09:18:43.951100 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951082002 +0000 UTC m=+35.625511763 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951118 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. 
No retries permitted until 2026-03-19 09:18:59.951110793 +0000 UTC m=+35.625540614 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951132 7385 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951137 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951126344 +0000 UTC m=+35.625556165 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : secret "metrics-daemon-secret" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951161 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951152724 +0000 UTC m=+35.625582545 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951168 7385 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951177 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert podName:16c631c1-277e-47d2-9377-a0bbd14673d4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951168695 +0000 UTC m=+35.625598506 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert") pod "cluster-version-operator-56d8475767-vmv8d" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4") : secret "cluster-version-operator-serving-cert" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951194 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951186115 +0000 UTC m=+35.625615916 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951141 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951201 7385 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951224 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951216756 +0000 UTC m=+35.625646457 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951206 7385 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951240 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951232957 +0000 UTC m=+35.625662768 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951227 7385 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951257 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951249637 +0000 UTC m=+35.625679338 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951270 7385 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951273 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951266728 +0000 UTC m=+35.625696539 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: I0319 09:18:43.951346 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951372 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951393 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951387941 +0000 UTC m=+35.625817642 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:43.951989 master-0 kubenswrapper[7385]: E0319 09:18:43.951463 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert podName:a57648b5-1a08-49a7-bedb-f7c1e54d92b4 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:59.951438172 +0000 UTC m=+35.625867913 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-8mpp9" (UID: "a57648b5-1a08-49a7-bedb-f7c1e54d92b4") : secret "performance-addon-operator-webhook-cert" not found
Mar 19 09:18:44.759932 master-0 kubenswrapper[7385]: I0319 09:18:44.759796 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:18:44.760113 master-0 kubenswrapper[7385]: E0319 09:18:44.759945 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:44.760113 master-0 kubenswrapper[7385]: E0319 09:18:44.760026 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:48.760003714 +0000 UTC m=+24.434433525 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : secret "serving-cert" not found
Mar 19 09:18:44.760113 master-0 kubenswrapper[7385]: I0319 09:18:44.760033 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:18:44.760213 master-0 kubenswrapper[7385]: E0319 09:18:44.760192 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:44.760245 master-0 kubenswrapper[7385]: E0319 09:18:44.760221 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:48.7602127 +0000 UTC m=+24.434642401 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : configmap "client-ca" not found
Mar 19 09:18:45.268273 master-0 kubenswrapper[7385]: I0319 09:18:45.267943 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:45.269487 master-0 kubenswrapper[7385]: E0319 09:18:45.268079 7385 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:45.269487 master-0 kubenswrapper[7385]: I0319 09:18:45.268364 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:45.269487 master-0 kubenswrapper[7385]: E0319 09:18:45.268402 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:53.268381738 +0000 UTC m=+28.942811439 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : secret "serving-cert" not found
Mar 19 09:18:45.269487 master-0 kubenswrapper[7385]: E0319 09:18:45.268459 7385 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:45.269487 master-0 kubenswrapper[7385]: E0319 09:18:45.268492 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:53.268482331 +0000 UTC m=+28.942912032 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : configmap "client-ca" not found
Mar 19 09:18:46.000173 master-0 kubenswrapper[7385]: I0319 09:18:46.000123 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" event={"ID":"fe1881fb-c670-442a-a092-c1eee6b7d5e5","Type":"ContainerStarted","Data":"68fbf6321802565874265d19454cbc64b4b4b521a0e102ded43536ee428b4258"}
Mar 19 09:18:46.681293 master-0 kubenswrapper[7385]: I0319 09:18:46.680879 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-587b98fbb9-l7x24"]
Mar 19 09:18:46.687738 master-0 kubenswrapper[7385]: I0319 09:18:46.682211 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691397 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691451 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691483 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691451 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691460 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691591 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691630 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.691762 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 19 09:18:46.692086 master-0 kubenswrapper[7385]: I0319 09:18:46.692007 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 19 09:18:46.700484 master-0 kubenswrapper[7385]: I0319 09:18:46.700307 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-587b98fbb9-l7x24"]
Mar 19 09:18:46.703592 master-0 kubenswrapper[7385]: I0319 09:18:46.703383 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 19 09:18:46.787578 master-0 kubenswrapper[7385]: I0319 09:18:46.787487 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit-dir\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.787848 master-0 kubenswrapper[7385]: I0319 09:18:46.787632 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-encryption-config\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.787848 master-0 kubenswrapper[7385]: I0319 09:18:46.787697 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-image-import-ca\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.787848 master-0 kubenswrapper[7385]: I0319 09:18:46.787734 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-serving-ca\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.787848 master-0 kubenswrapper[7385]: I0319 09:18:46.787755 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.787848 master-0 kubenswrapper[7385]: I0319 09:18:46.787778 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-node-pullsecrets\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.787848 master-0 kubenswrapper[7385]: I0319 09:18:46.787799 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-client\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.787848 master-0 kubenswrapper[7385]: I0319 09:18:46.787836 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzbz\" (UniqueName: \"kubernetes.io/projected/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-kube-api-access-pnzbz\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.788331 master-0 kubenswrapper[7385]: I0319 09:18:46.787860 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-trusted-ca-bundle\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.788331 master-0 kubenswrapper[7385]: I0319 09:18:46.787914 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.788331 master-0 kubenswrapper[7385]: I0319 09:18:46.787952 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-config\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.888813 master-0 kubenswrapper[7385]: I0319 09:18:46.888765 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit-dir\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889115 master-0 kubenswrapper[7385]: I0319 09:18:46.888882 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit-dir\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889210 master-0 kubenswrapper[7385]: I0319 09:18:46.889194 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-encryption-config\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889341 master-0 kubenswrapper[7385]: I0319 09:18:46.889325 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-image-import-ca\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889438 master-0 kubenswrapper[7385]: I0319 09:18:46.889425 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-serving-ca\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889514 master-0 kubenswrapper[7385]: I0319 09:18:46.889501 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889749 master-0 kubenswrapper[7385]: I0319 09:18:46.889732 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-node-pullsecrets\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889831 master-0 kubenswrapper[7385]: I0319 09:18:46.889818 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-client\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.889928 master-0 kubenswrapper[7385]: I0319 09:18:46.889914 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnzbz\" (UniqueName: \"kubernetes.io/projected/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-kube-api-access-pnzbz\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.890038 master-0 kubenswrapper[7385]: I0319 09:18:46.890024 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-trusted-ca-bundle\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.890123 master-0 kubenswrapper[7385]: I0319 09:18:46.890110 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.890227 master-0 kubenswrapper[7385]: I0319 09:18:46.890212 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-config\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.890350 master-0 kubenswrapper[7385]: I0319 09:18:46.890250 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-node-pullsecrets\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.890505 master-0 kubenswrapper[7385]: E0319 09:18:46.890440 7385 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 19 09:18:46.890589 master-0 kubenswrapper[7385]: E0319 09:18:46.890573 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:47.390519827 +0000 UTC m=+23.064949598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : configmap "audit-0" not found
Mar 19 09:18:46.890644 master-0 kubenswrapper[7385]: E0319 09:18:46.890588 7385 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 19 09:18:46.890692 master-0 kubenswrapper[7385]: E0319 09:18:46.890647 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:47.390628601 +0000 UTC m=+23.065058412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : secret "serving-cert" not found
Mar 19 09:18:46.891238 master-0 kubenswrapper[7385]: I0319 09:18:46.891208 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-image-import-ca\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.894561 master-0 kubenswrapper[7385]: I0319 09:18:46.891415 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-trusted-ca-bundle\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.894561 master-0 kubenswrapper[7385]: I0319 09:18:46.891687 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-serving-ca\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.894561 master-0 kubenswrapper[7385]: I0319 09:18:46.892581 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-config\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.896414 master-0 kubenswrapper[7385]: I0319 09:18:46.896379 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-encryption-config\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.896885 master-0 kubenswrapper[7385]: I0319 09:18:46.896863 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-client\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:46.928564 master-0 kubenswrapper[7385]: I0319 09:18:46.928282 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnzbz\" (UniqueName: \"kubernetes.io/projected/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-kube-api-access-pnzbz\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:47.003845 master-0 kubenswrapper[7385]: I0319 09:18:47.003631 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" event={"ID":"012cdc1d-ebc8-431e-9a52-9a39de95dd0d","Type":"ContainerStarted","Data":"7f84fbd703825db689c03d2baee5e05e0406b0c7857947e23dfe9649aed6fbc3"}
Mar 19 09:18:47.004736 master-0 kubenswrapper[7385]: I0319 09:18:47.004703 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerStarted","Data":"13eaf9fb6b5973dc7a39cf4a595a1daae2d0c0b608e70d2c41f378466d42eb35"}
Mar 19 09:18:47.397992 master-0 kubenswrapper[7385]: I0319 09:18:47.397948 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:47.397992 master-0 kubenswrapper[7385]: I0319 09:18:47.398004 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:47.398340 master-0 kubenswrapper[7385]: E0319 09:18:47.398143 7385 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 19 09:18:47.398340 master-0 kubenswrapper[7385]: E0319 09:18:47.398223 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:48.398209376 +0000 UTC m=+24.072639077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : secret "serving-cert" not found
Mar 19 09:18:47.398340 master-0 kubenswrapper[7385]: E0319 09:18:47.398282 7385 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 19 09:18:47.398340 master-0 kubenswrapper[7385]: E0319 09:18:47.398319 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:48.398310639 +0000 UTC m=+24.072740340 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : configmap "audit-0" not found
Mar 19 09:18:48.415084 master-0 kubenswrapper[7385]: I0319 09:18:48.410768 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:48.415084 master-0 kubenswrapper[7385]: E0319 09:18:48.410989 7385 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 19 09:18:48.415084 master-0 kubenswrapper[7385]: E0319 09:18:48.411095 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:50.411071743 +0000 UTC m=+26.085501454 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : secret "serving-cert" not found
Mar 19 09:18:48.415084 master-0 kubenswrapper[7385]: I0319 09:18:48.411175 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:48.415084 master-0 kubenswrapper[7385]: E0319 09:18:48.411633 7385 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 19 09:18:48.415084 master-0 kubenswrapper[7385]: E0319 09:18:48.411674 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:50.41166325 +0000 UTC m=+26.086092951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : configmap "audit-0" not found
Mar 19 09:18:48.816190 master-0 kubenswrapper[7385]: I0319 09:18:48.816034 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:18:48.816492 master-0 kubenswrapper[7385]: I0319 09:18:48.816467 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:18:48.816790 master-0 kubenswrapper[7385]: E0319 09:18:48.816224 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:48.816873 master-0 kubenswrapper[7385]: E0319 09:18:48.816844 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:56.8168193 +0000 UTC m=+32.491249021 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : secret "serving-cert" not found
Mar 19 09:18:48.816873 master-0 kubenswrapper[7385]: E0319 09:18:48.816600 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:48.816997 master-0 kubenswrapper[7385]: E0319 09:18:48.816894 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:56.816885662 +0000 UTC m=+32.491315373 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : configmap "client-ca" not found
Mar 19 09:18:50.438759 master-0 kubenswrapper[7385]: I0319 09:18:50.438088 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:50.438759 master-0 kubenswrapper[7385]: I0319 09:18:50.438435 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24"
Mar 19 09:18:50.438759 master-0 kubenswrapper[7385]: E0319 09:18:50.438276 7385 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 19 09:18:50.438759 master-0 kubenswrapper[7385]: E0319 09:18:50.438521 7385 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 19 09:18:50.438759 master-0 kubenswrapper[7385]: E0319 09:18:50.438619 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:54.438596779 +0000 UTC m=+30.113026560 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : secret "serving-cert" not found
Mar 19 09:18:50.438759 master-0 kubenswrapper[7385]: E0319 09:18:50.438704 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:54.438686291 +0000 UTC m=+30.113115992 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : configmap "audit-0" not found
Mar 19 09:18:52.022282 master-0 kubenswrapper[7385]: I0319 09:18:52.022234 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" event={"ID":"525b41b5-82d8-4d47-8350-79644a2c9360","Type":"ContainerStarted","Data":"24b10bdbe30c7b6a34e02317c7a4fad144a2b0ece63d82300dc1de99318fd6fe"}
Mar 19 09:18:52.409829 master-0 kubenswrapper[7385]: I0319 09:18:52.409747 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:52.410081 master-0 kubenswrapper[7385]: I0319 09:18:52.409963 7385 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 19 09:18:52.424841 master-0 kubenswrapper[7385]: I0319 09:18:52.424794 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:18:53.180583 master-0 kubenswrapper[7385]: I0319 09:18:53.177474 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 19 09:18:53.180583 master-0 kubenswrapper[7385]: I0319 09:18:53.178070 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 19 09:18:53.186583 master-0 kubenswrapper[7385]: I0319 09:18:53.185200 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 19 09:18:53.191565 master-0 kubenswrapper[7385]: I0319 09:18:53.189195 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 19 09:18:53.247581 master-0 kubenswrapper[7385]: I0319 09:18:53.246080 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-587b98fbb9-l7x24"]
Mar 19 09:18:53.247581 master-0 kubenswrapper[7385]: E0319 09:18:53.246397 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-587b98fbb9-l7x24" podUID="1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"
Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: I0319 09:18:53.276402 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: I0319 09:18:53.276594 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: I0319 09:18:53.276632 7385 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-var-lock\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: I0319 09:18:53.276766 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: I0319 09:18:53.276815 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9120887f-15a9-45e1-846d-dd85a5949ebb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: E0319 09:18:53.276969 7385 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: E0319 09:18:53.277020 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:09.276999917 +0000 UTC m=+44.951429618 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : secret "serving-cert" not found Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: E0319 09:18:53.277389 7385 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:18:53.277565 master-0 kubenswrapper[7385]: E0319 09:18:53.277420 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:09.277411278 +0000 UTC m=+44.951840979 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : configmap "client-ca" not found Mar 19 09:18:53.380569 master-0 kubenswrapper[7385]: I0319 09:18:53.377928 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-var-lock\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.380569 master-0 kubenswrapper[7385]: I0319 09:18:53.378009 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.380569 master-0 kubenswrapper[7385]: I0319 09:18:53.378061 7385 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9120887f-15a9-45e1-846d-dd85a5949ebb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.380569 master-0 kubenswrapper[7385]: I0319 09:18:53.378389 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-var-lock\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.380569 master-0 kubenswrapper[7385]: I0319 09:18:53.378419 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.468567 master-0 kubenswrapper[7385]: I0319 09:18:53.468457 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"] Mar 19 09:18:53.469136 master-0 kubenswrapper[7385]: I0319 09:18:53.469115 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.475216 master-0 kubenswrapper[7385]: I0319 09:18:53.475179 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 19 09:18:53.475408 master-0 kubenswrapper[7385]: I0319 09:18:53.475294 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 19 09:18:53.477714 master-0 kubenswrapper[7385]: I0319 09:18:53.477425 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 19 09:18:53.480858 master-0 kubenswrapper[7385]: I0319 09:18:53.480476 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9120887f-15a9-45e1-846d-dd85a5949ebb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.487533 master-0 kubenswrapper[7385]: I0319 09:18:53.487489 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 19 09:18:53.512872 master-0 kubenswrapper[7385]: I0319 09:18:53.512799 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"] Mar 19 09:18:53.513767 master-0 kubenswrapper[7385]: I0319 09:18:53.513734 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"] Mar 19 09:18:53.514369 master-0 kubenswrapper[7385]: I0319 09:18:53.514351 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.518406 master-0 kubenswrapper[7385]: I0319 09:18:53.518360 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 19 09:18:53.518752 master-0 kubenswrapper[7385]: I0319 09:18:53.518729 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 19 09:18:53.523369 master-0 kubenswrapper[7385]: I0319 09:18:53.523341 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 19 09:18:53.529330 master-0 kubenswrapper[7385]: I0319 09:18:53.529289 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:18:53.532077 master-0 kubenswrapper[7385]: I0319 09:18:53.532038 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"] Mar 19 09:18:53.579744 master-0 kubenswrapper[7385]: I0319 09:18:53.579558 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.579744 master-0 kubenswrapper[7385]: I0319 09:18:53.579633 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d58c6b38-ef11-465c-9fee-b83b84ce4669-cache\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: 
\"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.579744 master-0 kubenswrapper[7385]: I0319 09:18:53.579667 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6m8\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-kube-api-access-bs6m8\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.579744 master-0 kubenswrapper[7385]: I0319 09:18:53.579733 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.580076 master-0 kubenswrapper[7385]: I0319 09:18:53.579787 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.580076 master-0 kubenswrapper[7385]: I0319 09:18:53.579814 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 
19 09:18:53.580076 master-0 kubenswrapper[7385]: I0319 09:18:53.579833 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-cache\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.580076 master-0 kubenswrapper[7385]: I0319 09:18:53.579853 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.580076 master-0 kubenswrapper[7385]: I0319 09:18:53.579869 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrmjf\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-kube-api-access-rrmjf\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.580076 master-0 kubenswrapper[7385]: I0319 09:18:53.579897 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.580076 master-0 kubenswrapper[7385]: I0319 09:18:53.579913 7385 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.584883 master-0 kubenswrapper[7385]: I0319 09:18:53.584108 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-4lbsc"] Mar 19 09:18:53.584883 master-0 kubenswrapper[7385]: I0319 09:18:53.584786 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.587409 master-0 kubenswrapper[7385]: I0319 09:18:53.587373 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 09:18:53.587517 master-0 kubenswrapper[7385]: I0319 09:18:53.587421 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 09:18:53.587611 master-0 kubenswrapper[7385]: I0319 09:18:53.587601 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 09:18:53.587746 master-0 kubenswrapper[7385]: I0319 09:18:53.587653 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 09:18:53.591629 master-0 kubenswrapper[7385]: I0319 09:18:53.591355 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-4lbsc"] Mar 19 09:18:53.682977 master-0 kubenswrapper[7385]: I0319 09:18:53.682921 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-containers\") pod 
\"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.683277 master-0 kubenswrapper[7385]: I0319 09:18:53.683196 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.683373 master-0 kubenswrapper[7385]: I0319 09:18:53.683350 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d58c6b38-ef11-465c-9fee-b83b84ce4669-cache\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.683445 master-0 kubenswrapper[7385]: I0319 09:18:53.683422 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6m8\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-kube-api-access-bs6m8\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.683584 master-0 kubenswrapper[7385]: I0319 09:18:53.683559 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-cabundle\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 
09:18:53.691586 master-0 kubenswrapper[7385]: I0319 09:18:53.691531 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d58c6b38-ef11-465c-9fee-b83b84ce4669-cache\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.699228 master-0 kubenswrapper[7385]: I0319 09:18:53.699187 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.699394 master-0 kubenswrapper[7385]: I0319 09:18:53.699282 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9t7v\" (UniqueName: \"kubernetes.io/projected/fed75514-8f48-40b7-9fed-0afd6042cfbf-kube-api-access-h9t7v\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.699394 master-0 kubenswrapper[7385]: I0319 09:18:53.699305 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.699394 master-0 kubenswrapper[7385]: I0319 09:18:53.699343 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-docker\") pod 
\"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.699394 master-0 kubenswrapper[7385]: I0319 09:18:53.699361 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-cache\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.699394 master-0 kubenswrapper[7385]: I0319 09:18:53.699375 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-key\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.699604 master-0 kubenswrapper[7385]: I0319 09:18:53.699395 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.699604 master-0 kubenswrapper[7385]: I0319 09:18:53.699413 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrmjf\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-kube-api-access-rrmjf\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.699604 master-0 kubenswrapper[7385]: I0319 09:18:53.699449 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.699699 master-0 kubenswrapper[7385]: I0319 09:18:53.699689 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.699827 master-0 kubenswrapper[7385]: I0319 09:18:53.699782 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.700285 master-0 kubenswrapper[7385]: I0319 09:18:53.700256 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.700370 master-0 kubenswrapper[7385]: I0319 09:18:53.700338 7385 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-cache\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.700424 master-0 kubenswrapper[7385]: E0319 09:18:53.700400 7385 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 19 09:18:53.700464 master-0 kubenswrapper[7385]: E0319 09:18:53.700453 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs podName:d58c6b38-ef11-465c-9fee-b83b84ce4669 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:54.200440362 +0000 UTC m=+29.874870063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-rgzxb" (UID: "d58c6b38-ef11-465c-9fee-b83b84ce4669") : secret "catalogserver-cert" not found Mar 19 09:18:53.700512 master-0 kubenswrapper[7385]: I0319 09:18:53.700496 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.702998 master-0 kubenswrapper[7385]: I0319 09:18:53.702972 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6m8\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-kube-api-access-bs6m8\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: 
\"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.704873 master-0 kubenswrapper[7385]: I0319 09:18:53.704841 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:53.704971 master-0 kubenswrapper[7385]: I0319 09:18:53.704838 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.719084 master-0 kubenswrapper[7385]: I0319 09:18:53.718924 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrmjf\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-kube-api-access-rrmjf\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.730045 master-0 kubenswrapper[7385]: I0319 09:18:53.729889 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 09:18:53.803233 master-0 kubenswrapper[7385]: I0319 09:18:53.803187 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-cabundle\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: 
\"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.803466 master-0 kubenswrapper[7385]: I0319 09:18:53.803444 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9t7v\" (UniqueName: \"kubernetes.io/projected/fed75514-8f48-40b7-9fed-0afd6042cfbf-kube-api-access-h9t7v\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.803603 master-0 kubenswrapper[7385]: I0319 09:18:53.803575 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-key\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.804812 master-0 kubenswrapper[7385]: I0319 09:18:53.804779 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-cabundle\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.807305 master-0 kubenswrapper[7385]: I0319 09:18:53.807262 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-key\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.831002 master-0 kubenswrapper[7385]: I0319 09:18:53.830670 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:18:53.831261 master-0 kubenswrapper[7385]: I0319 09:18:53.831216 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9t7v\" (UniqueName: \"kubernetes.io/projected/fed75514-8f48-40b7-9fed-0afd6042cfbf-kube-api-access-h9t7v\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:53.913975 master-0 kubenswrapper[7385]: I0319 09:18:53.907834 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:18:54.025498 master-0 kubenswrapper[7385]: I0319 09:18:54.025434 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"] Mar 19 09:18:54.036317 master-0 kubenswrapper[7385]: W0319 09:18:54.036271 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5d9fbaf_ba14_4d2b_8376_1634eabbc782.slice/crio-d153f8589c77234f9dc34525d12bab7d6b406888e2e51c22abf001583537f5c4 WatchSource:0}: Error finding container d153f8589c77234f9dc34525d12bab7d6b406888e2e51c22abf001583537f5c4: Status 404 returned error can't find the container with id d153f8589c77234f9dc34525d12bab7d6b406888e2e51c22abf001583537f5c4 Mar 19 09:18:54.041247 master-0 kubenswrapper[7385]: I0319 09:18:54.041204 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"9120887f-15a9-45e1-846d-dd85a5949ebb","Type":"ContainerStarted","Data":"fc86f8c86cd7588ac0d5a124324c5dfeabbc5d914701e9c7cc4367a57ec98e9a"} Mar 19 09:18:54.041314 master-0 kubenswrapper[7385]: I0319 09:18:54.041264 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-587b98fbb9-l7x24" Mar 19 09:18:54.048669 master-0 kubenswrapper[7385]: I0319 09:18:54.048085 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-587b98fbb9-l7x24" Mar 19 09:18:54.107406 master-0 kubenswrapper[7385]: I0319 09:18:54.107355 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-4lbsc"] Mar 19 09:18:54.111634 master-0 kubenswrapper[7385]: W0319 09:18:54.111590 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfed75514_8f48_40b7_9fed_0afd6042cfbf.slice/crio-ce62aa530e9de7b740f93aac76703fc3a80b1ed5e0bbed25b7228c7b762d272f WatchSource:0}: Error finding container ce62aa530e9de7b740f93aac76703fc3a80b1ed5e0bbed25b7228c7b762d272f: Status 404 returned error can't find the container with id ce62aa530e9de7b740f93aac76703fc3a80b1ed5e0bbed25b7228c7b762d272f Mar 19 09:18:54.234041 master-0 kubenswrapper[7385]: I0319 09:18:54.233996 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-trusted-ca-bundle\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234042 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit-dir\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234079 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-image-import-ca\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234206 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-config\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234232 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-serving-ca\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234233 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234261 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-client\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234292 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnzbz\" (UniqueName: \"kubernetes.io/projected/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-kube-api-access-pnzbz\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234323 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-encryption-config\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234349 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-node-pullsecrets\") pod \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234590 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 
09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234721 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234729 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-config" (OuterVolumeSpecName: "config") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234768 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234807 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.234862 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: I0319 09:18:54.235197 7385 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: E0319 09:18:54.235228 7385 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 19 09:18:54.235344 master-0 kubenswrapper[7385]: E0319 09:18:54.235286 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs podName:d58c6b38-ef11-465c-9fee-b83b84ce4669 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:55.235266759 +0000 UTC m=+30.909696460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-rgzxb" (UID: "d58c6b38-ef11-465c-9fee-b83b84ce4669") : secret "catalogserver-cert" not found Mar 19 09:18:54.240895 master-0 kubenswrapper[7385]: I0319 09:18:54.239617 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-kube-api-access-pnzbz" (OuterVolumeSpecName: "kube-api-access-pnzbz") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "kube-api-access-pnzbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:18:54.240895 master-0 kubenswrapper[7385]: I0319 09:18:54.240009 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:18:54.240895 master-0 kubenswrapper[7385]: I0319 09:18:54.240634 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:18:54.338458 master-0 kubenswrapper[7385]: I0319 09:18:54.338110 7385 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.338458 master-0 kubenswrapper[7385]: I0319 09:18:54.338444 7385 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.338458 master-0 kubenswrapper[7385]: I0319 09:18:54.338456 7385 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.338458 master-0 kubenswrapper[7385]: I0319 09:18:54.338466 7385 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.338458 master-0 kubenswrapper[7385]: I0319 09:18:54.338478 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.338904 master-0 kubenswrapper[7385]: I0319 09:18:54.338486 7385 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.338904 master-0 kubenswrapper[7385]: I0319 09:18:54.338495 7385 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.338904 master-0 kubenswrapper[7385]: I0319 09:18:54.338504 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnzbz\" (UniqueName: \"kubernetes.io/projected/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-kube-api-access-pnzbz\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:54.439706 master-0 kubenswrapper[7385]: I0319 09:18:54.439526 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24" Mar 19 09:18:54.439706 master-0 kubenswrapper[7385]: I0319 09:18:54.439626 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit\") pod \"apiserver-587b98fbb9-l7x24\" (UID: \"1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7\") " pod="openshift-apiserver/apiserver-587b98fbb9-l7x24" Mar 19 09:18:54.439963 master-0 kubenswrapper[7385]: E0319 09:18:54.439892 7385 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 19 09:18:54.440091 master-0 kubenswrapper[7385]: E0319 09:18:54.439949 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:02.439932987 +0000 UTC m=+38.114362688 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : configmap "audit-0" not found Mar 19 09:18:54.442998 master-0 kubenswrapper[7385]: E0319 09:18:54.442937 7385 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 19 09:18:54.443138 master-0 kubenswrapper[7385]: E0319 09:18:54.443034 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert podName:1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:02.443009984 +0000 UTC m=+38.117439755 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert") pod "apiserver-587b98fbb9-l7x24" (UID: "1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7") : secret "serving-cert" not found Mar 19 09:18:55.048027 master-0 kubenswrapper[7385]: I0319 09:18:55.047977 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" event={"ID":"fed75514-8f48-40b7-9fed-0afd6042cfbf","Type":"ContainerStarted","Data":"fb94cc236c27d9ae2255663fca024f5b90148e514af1cb8c7ed1eaef28fc1582"} Mar 19 09:18:55.048027 master-0 kubenswrapper[7385]: I0319 09:18:55.048021 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" event={"ID":"fed75514-8f48-40b7-9fed-0afd6042cfbf","Type":"ContainerStarted","Data":"ce62aa530e9de7b740f93aac76703fc3a80b1ed5e0bbed25b7228c7b762d272f"} Mar 19 09:18:55.049602 master-0 kubenswrapper[7385]: I0319 09:18:55.049576 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" 
event={"ID":"9120887f-15a9-45e1-846d-dd85a5949ebb","Type":"ContainerStarted","Data":"c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d"} Mar 19 09:18:55.052795 master-0 kubenswrapper[7385]: I0319 09:18:55.052767 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-587b98fbb9-l7x24" Mar 19 09:18:55.052938 master-0 kubenswrapper[7385]: I0319 09:18:55.052815 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" event={"ID":"d5d9fbaf-ba14-4d2b-8376-1634eabbc782","Type":"ContainerStarted","Data":"bbed6bca8d39576b98029b1913fd15c019012d9c7e21fd5f96699c0b40824ef5"} Mar 19 09:18:55.052938 master-0 kubenswrapper[7385]: I0319 09:18:55.052904 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" event={"ID":"d5d9fbaf-ba14-4d2b-8376-1634eabbc782","Type":"ContainerStarted","Data":"02033eb14ea31d2437ce887b5f2e88f1b7e843f260536c63c7e107349723d088"} Mar 19 09:18:55.053062 master-0 kubenswrapper[7385]: I0319 09:18:55.052936 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" event={"ID":"d5d9fbaf-ba14-4d2b-8376-1634eabbc782","Type":"ContainerStarted","Data":"d153f8589c77234f9dc34525d12bab7d6b406888e2e51c22abf001583537f5c4"} Mar 19 09:18:55.076364 master-0 kubenswrapper[7385]: I0319 09:18:55.075732 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" podStartSLOduration=2.075715749 podStartE2EDuration="2.075715749s" podCreationTimestamp="2026-03-19 09:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:55.074863935 +0000 UTC m=+30.749293676" 
watchObservedRunningTime="2026-03-19 09:18:55.075715749 +0000 UTC m=+30.750145450" Mar 19 09:18:55.116086 master-0 kubenswrapper[7385]: I0319 09:18:55.115655 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6f6b54748-s5cpx"] Mar 19 09:18:55.117744 master-0 kubenswrapper[7385]: I0319 09:18:55.116817 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.123387 master-0 kubenswrapper[7385]: I0319 09:18:55.121865 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 09:18:55.123387 master-0 kubenswrapper[7385]: I0319 09:18:55.123164 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 09:18:55.123773 master-0 kubenswrapper[7385]: I0319 09:18:55.123681 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 09:18:55.123869 master-0 kubenswrapper[7385]: I0319 09:18:55.123850 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 09:18:55.125406 master-0 kubenswrapper[7385]: I0319 09:18:55.124655 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 09:18:55.125406 master-0 kubenswrapper[7385]: I0319 09:18:55.124869 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 09:18:55.125406 master-0 kubenswrapper[7385]: I0319 09:18:55.125014 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 09:18:55.125406 master-0 kubenswrapper[7385]: I0319 09:18:55.125174 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 09:18:55.127259 master-0 kubenswrapper[7385]: I0319 
09:18:55.127192 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-587b98fbb9-l7x24"] Mar 19 09:18:55.131125 master-0 kubenswrapper[7385]: I0319 09:18:55.129006 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 09:18:55.135946 master-0 kubenswrapper[7385]: I0319 09:18:55.132886 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 09:18:55.144812 master-0 kubenswrapper[7385]: I0319 09:18:55.144753 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-587b98fbb9-l7x24"] Mar 19 09:18:55.145036 master-0 kubenswrapper[7385]: I0319 09:18:55.145023 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6f6b54748-s5cpx"] Mar 19 09:18:55.188294 master-0 kubenswrapper[7385]: I0319 09:18:55.185077 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" podStartSLOduration=2.185060642 podStartE2EDuration="2.185060642s" podCreationTimestamp="2026-03-19 09:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:55.184028413 +0000 UTC m=+30.858458124" watchObservedRunningTime="2026-03-19 09:18:55.185060642 +0000 UTC m=+30.859490343" Mar 19 09:18:55.188294 master-0 kubenswrapper[7385]: I0319 09:18:55.185844 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.185837554 podStartE2EDuration="2.185837554s" podCreationTimestamp="2026-03-19 09:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:55.161341981 +0000 UTC m=+30.835771692" 
watchObservedRunningTime="2026-03-19 09:18:55.185837554 +0000 UTC m=+30.860267265" Mar 19 09:18:55.256393 master-0 kubenswrapper[7385]: I0319 09:18:55.256047 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.256393 master-0 kubenswrapper[7385]: I0319 09:18:55.256388 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svz6j\" (UniqueName: \"kubernetes.io/projected/1669b77c-4bef-42d5-ad0b-63c12a6677b2-kube-api-access-svz6j\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.256450 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.256505 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-client\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.256826 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.256853 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit-dir\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.256892 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.256962 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-node-pullsecrets\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.256984 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-serving-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 
09:18:55.257004 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-image-import-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.257034 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-trusted-ca-bundle\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.257062 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-encryption-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.257103 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: I0319 09:18:55.257117 7385 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7-audit\") on node \"master-0\" DevicePath \"\"" Mar 19 09:18:55.257586 master-0 kubenswrapper[7385]: E0319 09:18:55.257253 7385 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 19 09:18:55.257586 master-0 
kubenswrapper[7385]: E0319 09:18:55.257300 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs podName:d58c6b38-ef11-465c-9fee-b83b84ce4669 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:57.257283654 +0000 UTC m=+32.931713355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-rgzxb" (UID: "d58c6b38-ef11-465c-9fee-b83b84ce4669") : secret "catalogserver-cert" not found
Mar 19 09:18:55.358213 master-0 kubenswrapper[7385]: I0319 09:18:55.358167 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359468 master-0 kubenswrapper[7385]: I0319 09:18:55.359434 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359595 master-0 kubenswrapper[7385]: I0319 09:18:55.358269 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-client\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359692 master-0 kubenswrapper[7385]: I0319 09:18:55.359639 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359692 master-0 kubenswrapper[7385]: I0319 09:18:55.359670 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit-dir\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359802 master-0 kubenswrapper[7385]: E0319 09:18:55.359777 7385 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 19 09:18:55.359802 master-0 kubenswrapper[7385]: I0319 09:18:55.359793 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit-dir\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359878 master-0 kubenswrapper[7385]: E0319 09:18:55.359844 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert podName:1669b77c-4bef-42d5-ad0b-63c12a6677b2 nodeName:}" failed. No retries permitted until 2026-03-19 09:18:55.859826525 +0000 UTC m=+31.534256336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert") pod "apiserver-6f6b54748-s5cpx" (UID: "1669b77c-4bef-42d5-ad0b-63c12a6677b2") : secret "serving-cert" not found
Mar 19 09:18:55.359916 master-0 kubenswrapper[7385]: I0319 09:18:55.359873 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-node-pullsecrets\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359951 master-0 kubenswrapper[7385]: I0319 09:18:55.359912 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-serving-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.359951 master-0 kubenswrapper[7385]: I0319 09:18:55.359941 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-image-import-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.360053 master-0 kubenswrapper[7385]: I0319 09:18:55.360020 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-trusted-ca-bundle\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.360102 master-0 kubenswrapper[7385]: I0319 09:18:55.360066 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-encryption-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.360141 master-0 kubenswrapper[7385]: I0319 09:18:55.360109 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.360292 master-0 kubenswrapper[7385]: I0319 09:18:55.360248 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-node-pullsecrets\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.360826 master-0 kubenswrapper[7385]: I0319 09:18:55.360796 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-serving-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.360906 master-0 kubenswrapper[7385]: I0319 09:18:55.360849 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.360906 master-0 kubenswrapper[7385]: I0319 09:18:55.360883 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svz6j\" (UniqueName: \"kubernetes.io/projected/1669b77c-4bef-42d5-ad0b-63c12a6677b2-kube-api-access-svz6j\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.361243 master-0 kubenswrapper[7385]: I0319 09:18:55.361215 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-trusted-ca-bundle\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.361400 master-0 kubenswrapper[7385]: I0319 09:18:55.361361 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-image-import-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.364976 master-0 kubenswrapper[7385]: I0319 09:18:55.364939 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-client\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.365679 master-0 kubenswrapper[7385]: I0319 09:18:55.365635 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-encryption-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.383553 master-0 kubenswrapper[7385]: I0319 09:18:55.383484 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svz6j\" (UniqueName: \"kubernetes.io/projected/1669b77c-4bef-42d5-ad0b-63c12a6677b2-kube-api-access-svz6j\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.867199 master-0 kubenswrapper[7385]: I0319 09:18:55.867149 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:55.870650 master-0 kubenswrapper[7385]: I0319 09:18:55.870606 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:56.047921 master-0 kubenswrapper[7385]: I0319 09:18:56.047866 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:18:56.055842 master-0 kubenswrapper[7385]: I0319 09:18:56.055496 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:18:56.232830 master-0 kubenswrapper[7385]: I0319 09:18:56.232710 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6f6b54748-s5cpx"]
Mar 19 09:18:56.539627 master-0 kubenswrapper[7385]: I0319 09:18:56.538201 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7" path="/var/lib/kubelet/pods/1cdf409d-a8d8-42fc-a0d0-eab6d8f528d7/volumes"
Mar 19 09:18:56.882282 master-0 kubenswrapper[7385]: I0319 09:18:56.881337 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:18:56.882282 master-0 kubenswrapper[7385]: I0319 09:18:56.881506 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:18:56.882282 master-0 kubenswrapper[7385]: E0319 09:18:56.881629 7385 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 19 09:18:56.882282 master-0 kubenswrapper[7385]: E0319 09:18:56.881718 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:18:56.882282 master-0 kubenswrapper[7385]: E0319 09:18:56.881734 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:12.881707767 +0000 UTC m=+48.556137498 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : secret "serving-cert" not found
Mar 19 09:18:56.882282 master-0 kubenswrapper[7385]: E0319 09:18:56.881951 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:12.881895863 +0000 UTC m=+48.556325574 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : configmap "client-ca" not found
Mar 19 09:18:57.066348 master-0 kubenswrapper[7385]: I0319 09:18:57.066309 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" event={"ID":"1669b77c-4bef-42d5-ad0b-63c12a6677b2","Type":"ContainerStarted","Data":"319cb3ca2c37415dc41e1160ebdc6c8cfc6a2108542dd10b877b244ac8b9e929"}
Mar 19 09:18:57.289620 master-0 kubenswrapper[7385]: I0319 09:18:57.289499 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:18:57.294257 master-0 kubenswrapper[7385]: I0319 09:18:57.293617 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:18:57.381011 master-0 kubenswrapper[7385]: I0319 09:18:57.380973 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:18:57.550879 master-0 kubenswrapper[7385]: I0319 09:18:57.550727 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"]
Mar 19 09:18:58.071385 master-0 kubenswrapper[7385]: I0319 09:18:58.071313 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" event={"ID":"d58c6b38-ef11-465c-9fee-b83b84ce4669","Type":"ContainerStarted","Data":"742f2b9c536e8374c80963c76d1696cff2ac061aef9be3d98e75e3dbbdd21557"}
Mar 19 09:18:58.071385 master-0 kubenswrapper[7385]: I0319 09:18:58.071353 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" event={"ID":"d58c6b38-ef11-465c-9fee-b83b84ce4669","Type":"ContainerStarted","Data":"72f5421985c5109d770268e1f0a9a31b56de99fc589e7d8c0cffb145c413e3b4"}
Mar 19 09:18:58.071385 master-0 kubenswrapper[7385]: I0319 09:18:58.071363 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" event={"ID":"d58c6b38-ef11-465c-9fee-b83b84ce4669","Type":"ContainerStarted","Data":"d1ec5df20bed29547ffb1f52c2c4287cab5554fd187df0c227bb31c435fc62a0"}
Mar 19 09:18:58.072722 master-0 kubenswrapper[7385]: I0319 09:18:58.071583 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:18:59.571329 master-0 kubenswrapper[7385]: I0319 09:18:59.570890 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" podStartSLOduration=6.570853014 podStartE2EDuration="6.570853014s" podCreationTimestamp="2026-03-19 09:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:18:58.086120641 +0000 UTC m=+33.760550372" watchObservedRunningTime="2026-03-19 09:18:59.570853014 +0000 UTC m=+35.245282715"
Mar 19 09:18:59.573304 master-0 kubenswrapper[7385]: I0319 09:18:59.573270 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 19 09:18:59.573928 master-0 kubenswrapper[7385]: I0319 09:18:59.573894 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.575666 master-0 kubenswrapper[7385]: I0319 09:18:59.575623 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 19 09:18:59.584294 master-0 kubenswrapper[7385]: I0319 09:18:59.584210 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 19 09:18:59.674620 master-0 kubenswrapper[7385]: I0319 09:18:59.674572 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-var-lock\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.674827 master-0 kubenswrapper[7385]: I0319 09:18:59.674652 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9969717-8350-416e-8711-877cdf557d81-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.674827 master-0 kubenswrapper[7385]: I0319 09:18:59.674719 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.776963 master-0 kubenswrapper[7385]: I0319 09:18:59.776921 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-var-lock\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.777206 master-0 kubenswrapper[7385]: I0319 09:18:59.776999 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9969717-8350-416e-8711-877cdf557d81-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.777206 master-0 kubenswrapper[7385]: I0319 09:18:59.777090 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-var-lock\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.777295 master-0 kubenswrapper[7385]: I0319 09:18:59.777245 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.777411 master-0 kubenswrapper[7385]: I0319 09:18:59.777376 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.796745 master-0 kubenswrapper[7385]: I0319 09:18:59.796713 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9969717-8350-416e-8711-877cdf557d81-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.901813 master-0 kubenswrapper[7385]: I0319 09:18:59.901752 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980246 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980307 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980339 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980375 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980402 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980429 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980502 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980567 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980598 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980629 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980653 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980688 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980719 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980751 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980778 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: I0319 09:18:59.980802 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.981537 7385 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: secret "mco-proxy-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.981637 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls podName:c222998f-6211-4466-8ad7-5d9fcfb10789 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.981615332 +0000 UTC m=+67.656045113 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls") pod "machine-config-operator-84d549f6d5-4wv72" (UID: "c222998f-6211-4466-8ad7-5d9fcfb10789") : secret "mco-proxy-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982053 7385 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982086 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls podName:676f4062-ea34-48d0-80d7-3cd3d9da341e nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.982075914 +0000 UTC m=+67.656505705 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-wptdb" (UID: "676f4062-ea34-48d0-80d7-3cd3d9da341e") : secret "cluster-monitoring-operator-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982466 7385 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982503 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls podName:d6cd2eac-6412-4f38-8272-743c67b218a3 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.982492776 +0000 UTC m=+67.656922577 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-nc9rw" (UID: "d6cd2eac-6412-4f38-8272-743c67b218a3") : secret "image-registry-operator-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982603 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982634 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.98262303 +0000 UTC m=+67.657052841 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-operator-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982684 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.982711 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert podName:211d123b-829c-49dd-b119-e172cab607cf nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.982702732 +0000 UTC m=+67.657132553 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert") pod "catalog-operator-68f85b4d6c-tlmxr" (UID: "211d123b-829c-49dd-b119-e172cab607cf") : secret "catalog-operator-serving-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.983682 7385 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.983717 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert podName:3a07456d-2e8e-4e80-a777-d0903ad21f07 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.9837056 +0000 UTC m=+67.658135391 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert") pod "cluster-baremetal-operator-6f69995874-sw7cc" (UID: "3a07456d-2e8e-4e80-a777-d0903ad21f07") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984614 7385 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984820 7385 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984832 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs podName:3816f149-ddce-41c8-a540-fe866ee71c5e nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.984642797 +0000 UTC m=+67.659072588 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-q4rkm" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e") : secret "multus-admission-controller-secret" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984854 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics podName:58fbf09a-3a26-45ab-8496-11d05c27e9cf nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.984843823 +0000 UTC m=+67.659273524 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-stct6" (UID: "58fbf09a-3a26-45ab-8496-11d05c27e9cf") : secret "marketplace-operator-metrics" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984894 7385 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984896 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984919 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert podName:e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.984913825 +0000 UTC m=+67.659343526 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-52j2b" (UID: "e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc") : secret "package-server-manager-serving-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984931 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs podName:bff5aeea-f859-4e38-bf1c-9e730025c212 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.984925305 +0000 UTC m=+67.659355006 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs") pod "network-metrics-daemon-lflg7" (UID: "bff5aeea-f859-4e38-bf1c-9e730025c212") : secret "metrics-daemon-secret" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984961 7385 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984966 7385 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.984987 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert podName:e25a16f3-dfe0-49c5-a31d-e310d369f406 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.984978006 +0000 UTC m=+67.659407807 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert") pod "olm-operator-5c9796789-fts6w" (UID: "e25a16f3-dfe0-49c5-a31d-e310d369f406") : secret "olm-operator-serving-cert" not found
Mar 19 09:18:59.986566 master-0 kubenswrapper[7385]: E0319 09:18:59.985003 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls podName:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a nodeName:}" failed. No retries permitted until 2026-03-19 09:19:31.984995367 +0000 UTC m=+67.659425198 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls") pod "ingress-operator-66b84d69b-vfnhd" (UID: "8bdeb4f3-99f7-44ef-beac-53c3cc073c5a") : secret "metrics-tls" not found Mar 19 09:18:59.988183 master-0 kubenswrapper[7385]: I0319 09:18:59.986729 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"cluster-version-operator-56d8475767-vmv8d\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:18:59.988183 master-0 kubenswrapper[7385]: I0319 09:18:59.987246 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:18:59.988183 master-0 kubenswrapper[7385]: I0319 09:18:59.987781 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:18:59.988579 master-0 kubenswrapper[7385]: I0319 09:18:59.988520 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:19:00.138206 master-0 kubenswrapper[7385]: I0319 09:19:00.137905 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 19 09:19:00.146617 master-0 kubenswrapper[7385]: W0319 09:19:00.146514 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb9969717_8350_416e_8711_877cdf557d81.slice/crio-98b4484c29bf71462f8aa83a2438a018a65a72efc3ab1ad01ecc3b27224d1c48 WatchSource:0}: Error finding container 98b4484c29bf71462f8aa83a2438a018a65a72efc3ab1ad01ecc3b27224d1c48: Status 404 returned error can't find the container with id 98b4484c29bf71462f8aa83a2438a018a65a72efc3ab1ad01ecc3b27224d1c48 Mar 19 09:19:00.266728 master-0 kubenswrapper[7385]: I0319 09:19:00.266681 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:19:00.267414 master-0 kubenswrapper[7385]: I0319 09:19:00.267324 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:19:00.274488 master-0 kubenswrapper[7385]: I0319 09:19:00.274102 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:19:00.455577 master-0 kubenswrapper[7385]: I0319 09:19:00.455243 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-k89rz"] Mar 19 09:19:00.465939 master-0 kubenswrapper[7385]: W0319 09:19:00.465897 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45523224_f530_4354_90de_7fd65a1a3911.slice/crio-79e902522cf9e089c0a0493aeac487bed34c920c85cbed922e6fdff4d7dc7fa4 WatchSource:0}: Error finding container 79e902522cf9e089c0a0493aeac487bed34c920c85cbed922e6fdff4d7dc7fa4: Status 404 returned error can't find the container with id 79e902522cf9e089c0a0493aeac487bed34c920c85cbed922e6fdff4d7dc7fa4 Mar 19 09:19:00.473673 master-0 kubenswrapper[7385]: I0319 09:19:00.473633 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"] Mar 19 09:19:00.480535 master-0 kubenswrapper[7385]: W0319 09:19:00.480504 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda57648b5_1a08_49a7_bedb_f7c1e54d92b4.slice/crio-2ba0c50971e9f4b73d6981687bf5599b2b14e3a056e01cd696dec3ae2bc23ec5 WatchSource:0}: Error finding container 2ba0c50971e9f4b73d6981687bf5599b2b14e3a056e01cd696dec3ae2bc23ec5: Status 404 returned error can't find the container with id 2ba0c50971e9f4b73d6981687bf5599b2b14e3a056e01cd696dec3ae2bc23ec5 Mar 19 09:19:01.083758 master-0 kubenswrapper[7385]: I0319 09:19:01.083641 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" event={"ID":"16c631c1-277e-47d2-9377-a0bbd14673d4","Type":"ContainerStarted","Data":"715b309f665a655842c51c00d42f465aa6afd85addb3e0939c3cbfa5da354926"} Mar 19 09:19:01.084797 master-0 
kubenswrapper[7385]: I0319 09:19:01.084420 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" event={"ID":"a57648b5-1a08-49a7-bedb-f7c1e54d92b4","Type":"ContainerStarted","Data":"2ba0c50971e9f4b73d6981687bf5599b2b14e3a056e01cd696dec3ae2bc23ec5"} Mar 19 09:19:01.085403 master-0 kubenswrapper[7385]: I0319 09:19:01.085369 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" event={"ID":"45523224-f530-4354-90de-7fd65a1a3911","Type":"ContainerStarted","Data":"79e902522cf9e089c0a0493aeac487bed34c920c85cbed922e6fdff4d7dc7fa4"} Mar 19 09:19:01.086869 master-0 kubenswrapper[7385]: I0319 09:19:01.086840 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"b9969717-8350-416e-8711-877cdf557d81","Type":"ContainerStarted","Data":"e03d771886973476dcc44da1c43c397db09c499968945f5153359a0c06bc98ab"} Mar 19 09:19:01.086956 master-0 kubenswrapper[7385]: I0319 09:19:01.086872 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"b9969717-8350-416e-8711-877cdf557d81","Type":"ContainerStarted","Data":"98b4484c29bf71462f8aa83a2438a018a65a72efc3ab1ad01ecc3b27224d1c48"} Mar 19 09:19:01.088301 master-0 kubenswrapper[7385]: I0319 09:19:01.088260 7385 generic.go:334] "Generic (PLEG): container finished" podID="1669b77c-4bef-42d5-ad0b-63c12a6677b2" containerID="8fdd3e54be9275c8b5b5e2dc371f021c349c7ee0ec07fc61904fbdf75b35b7e2" exitCode=0 Mar 19 09:19:01.088301 master-0 kubenswrapper[7385]: I0319 09:19:01.088290 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" event={"ID":"1669b77c-4bef-42d5-ad0b-63c12a6677b2","Type":"ContainerDied","Data":"8fdd3e54be9275c8b5b5e2dc371f021c349c7ee0ec07fc61904fbdf75b35b7e2"} Mar 19 09:19:01.104739 master-0 kubenswrapper[7385]: I0319 
09:19:01.104649 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.104632283 podStartE2EDuration="2.104632283s" podCreationTimestamp="2026-03-19 09:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:01.101663229 +0000 UTC m=+36.776092940" watchObservedRunningTime="2026-03-19 09:19:01.104632283 +0000 UTC m=+36.779061984" Mar 19 09:19:02.096145 master-0 kubenswrapper[7385]: I0319 09:19:02.095865 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" event={"ID":"1669b77c-4bef-42d5-ad0b-63c12a6677b2","Type":"ContainerStarted","Data":"71858629d31fef6b9c6dc20c4f7f613bd15e517bc1eaddb2fca9ae3af33ff6f8"} Mar 19 09:19:02.096145 master-0 kubenswrapper[7385]: I0319 09:19:02.096137 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" event={"ID":"1669b77c-4bef-42d5-ad0b-63c12a6677b2","Type":"ContainerStarted","Data":"94407148934d53204156ea026ef9f4894a2467f7f791e4f9ba3cd0af55f51ced"} Mar 19 09:19:02.119393 master-0 kubenswrapper[7385]: I0319 09:19:02.119324 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" podStartSLOduration=5.324213226 podStartE2EDuration="9.119304561s" podCreationTimestamp="2026-03-19 09:18:53 +0000 UTC" firstStartedPulling="2026-03-19 09:18:56.24251596 +0000 UTC m=+31.916945661" lastFinishedPulling="2026-03-19 09:19:00.037607295 +0000 UTC m=+35.712036996" observedRunningTime="2026-03-19 09:19:02.119014313 +0000 UTC m=+37.793444014" watchObservedRunningTime="2026-03-19 09:19:02.119304561 +0000 UTC m=+37.793734262" Mar 19 09:19:03.837729 master-0 kubenswrapper[7385]: I0319 09:19:03.834889 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:19:04.104696 master-0 kubenswrapper[7385]: I0319 09:19:04.104440 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" event={"ID":"16c631c1-277e-47d2-9377-a0bbd14673d4","Type":"ContainerStarted","Data":"e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0"} Mar 19 09:19:04.108620 master-0 kubenswrapper[7385]: I0319 09:19:04.108595 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" event={"ID":"45523224-f530-4354-90de-7fd65a1a3911","Type":"ContainerStarted","Data":"1216bc5493a9d30e4d5da9a04e7da67f0cdf25f822e1f07c71067bfcecdf5f8b"} Mar 19 09:19:04.551302 master-0 kubenswrapper[7385]: I0319 09:19:04.551059 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9xr8p"] Mar 19 09:19:04.557979 master-0 kubenswrapper[7385]: I0319 09:19:04.557390 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.558673 master-0 kubenswrapper[7385]: I0319 09:19:04.557915 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9xr8p"] Mar 19 09:19:04.559966 master-0 kubenswrapper[7385]: I0319 09:19:04.559776 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 09:19:04.559966 master-0 kubenswrapper[7385]: I0319 09:19:04.559812 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 19 09:19:04.559966 master-0 kubenswrapper[7385]: I0319 09:19:04.559912 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 09:19:04.560143 master-0 kubenswrapper[7385]: I0319 09:19:04.560082 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 09:19:04.675393 master-0 kubenswrapper[7385]: I0319 09:19:04.674120 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzx9\" (UniqueName: \"kubernetes.io/projected/56365780-b87d-43fc-95f5-8a44166aecf8-kube-api-access-5rzx9\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.675393 master-0 kubenswrapper[7385]: I0319 09:19:04.674167 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.675393 master-0 kubenswrapper[7385]: I0319 09:19:04.674286 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/56365780-b87d-43fc-95f5-8a44166aecf8-config-volume\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.779638 master-0 kubenswrapper[7385]: I0319 09:19:04.774865 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56365780-b87d-43fc-95f5-8a44166aecf8-config-volume\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.779638 master-0 kubenswrapper[7385]: I0319 09:19:04.774966 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.779638 master-0 kubenswrapper[7385]: I0319 09:19:04.774984 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rzx9\" (UniqueName: \"kubernetes.io/projected/56365780-b87d-43fc-95f5-8a44166aecf8-kube-api-access-5rzx9\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.779638 master-0 kubenswrapper[7385]: I0319 09:19:04.776322 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56365780-b87d-43fc-95f5-8a44166aecf8-config-volume\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.779638 master-0 kubenswrapper[7385]: E0319 09:19:04.776392 7385 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 19 09:19:04.779638 master-0 kubenswrapper[7385]: E0319 09:19:04.776428 7385 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls podName:56365780-b87d-43fc-95f5-8a44166aecf8 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:05.276416961 +0000 UTC m=+40.950846662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls") pod "dns-default-9xr8p" (UID: "56365780-b87d-43fc-95f5-8a44166aecf8") : secret "dns-default-metrics-tls" not found Mar 19 09:19:04.805264 master-0 kubenswrapper[7385]: I0319 09:19:04.798173 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rzx9\" (UniqueName: \"kubernetes.io/projected/56365780-b87d-43fc-95f5-8a44166aecf8-kube-api-access-5rzx9\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:04.879631 master-0 kubenswrapper[7385]: I0319 09:19:04.879574 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mf78p"] Mar 19 09:19:04.880269 master-0 kubenswrapper[7385]: I0319 09:19:04.880055 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:04.977474 master-0 kubenswrapper[7385]: I0319 09:19:04.977339 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a591384f-f83e-4f65-b5d0-d519f05edbd9-hosts-file\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:04.977474 master-0 kubenswrapper[7385]: I0319 09:19:04.977418 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbmx9\" (UniqueName: \"kubernetes.io/projected/a591384f-f83e-4f65-b5d0-d519f05edbd9-kube-api-access-vbmx9\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:05.079489 master-0 kubenswrapper[7385]: I0319 09:19:05.079108 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a591384f-f83e-4f65-b5d0-d519f05edbd9-hosts-file\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:05.079489 master-0 kubenswrapper[7385]: I0319 09:19:05.079458 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbmx9\" (UniqueName: \"kubernetes.io/projected/a591384f-f83e-4f65-b5d0-d519f05edbd9-kube-api-access-vbmx9\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:05.079689 master-0 kubenswrapper[7385]: I0319 09:19:05.079259 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a591384f-f83e-4f65-b5d0-d519f05edbd9-hosts-file\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " 
pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:05.095374 master-0 kubenswrapper[7385]: I0319 09:19:05.095352 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbmx9\" (UniqueName: \"kubernetes.io/projected/a591384f-f83e-4f65-b5d0-d519f05edbd9-kube-api-access-vbmx9\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:05.205585 master-0 kubenswrapper[7385]: I0319 09:19:05.204754 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mf78p" Mar 19 09:19:05.281314 master-0 kubenswrapper[7385]: I0319 09:19:05.281265 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:05.281610 master-0 kubenswrapper[7385]: E0319 09:19:05.281449 7385 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 19 09:19:05.281610 master-0 kubenswrapper[7385]: E0319 09:19:05.281519 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls podName:56365780-b87d-43fc-95f5-8a44166aecf8 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:06.281503076 +0000 UTC m=+41.955932777 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls") pod "dns-default-9xr8p" (UID: "56365780-b87d-43fc-95f5-8a44166aecf8") : secret "dns-default-metrics-tls" not found Mar 19 09:19:06.054230 master-0 kubenswrapper[7385]: I0319 09:19:06.054156 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:19:06.054813 master-0 kubenswrapper[7385]: I0319 09:19:06.054623 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:19:06.076268 master-0 kubenswrapper[7385]: I0319 09:19:06.076237 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:19:06.120667 master-0 kubenswrapper[7385]: I0319 09:19:06.120634 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:19:06.330350 master-0 kubenswrapper[7385]: I0319 09:19:06.330240 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:06.330505 master-0 kubenswrapper[7385]: E0319 09:19:06.330480 7385 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 19 09:19:06.330570 master-0 kubenswrapper[7385]: E0319 09:19:06.330552 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls podName:56365780-b87d-43fc-95f5-8a44166aecf8 nodeName:}" failed. 
No retries permitted until 2026-03-19 09:19:08.330524525 +0000 UTC m=+44.004954226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls") pod "dns-default-9xr8p" (UID: "56365780-b87d-43fc-95f5-8a44166aecf8") : secret "dns-default-metrics-tls" not found Mar 19 09:19:07.034374 master-0 kubenswrapper[7385]: I0319 09:19:07.034086 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-5r5sh"] Mar 19 09:19:07.034869 master-0 kubenswrapper[7385]: I0319 09:19:07.034845 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.122623 master-0 kubenswrapper[7385]: I0319 09:19:07.122421 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" event={"ID":"a57648b5-1a08-49a7-bedb-f7c1e54d92b4","Type":"ContainerStarted","Data":"8877b45464c5376d1635f878edec2b26c0ed093e8a5de4899f80eaf0d08390b4"} Mar 19 09:19:07.126026 master-0 kubenswrapper[7385]: I0319 09:19:07.125987 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mf78p" event={"ID":"a591384f-f83e-4f65-b5d0-d519f05edbd9","Type":"ContainerStarted","Data":"a870a11369207d668689a2c851cab071545069df1d0b270f69aa17ea6033ba1b"} Mar 19 09:19:07.126096 master-0 kubenswrapper[7385]: I0319 09:19:07.126033 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mf78p" event={"ID":"a591384f-f83e-4f65-b5d0-d519f05edbd9","Type":"ContainerStarted","Data":"408fc587f3c1d995e472d57ef08e1448783433be2d773a5e80c2f22fddf79bea"} Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137023 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: 
\"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-tuned\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137118 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npg9k\" (UniqueName: \"kubernetes.io/projected/dde1a2d9-a43e-4b26-82d7-e0f83577468f-kube-api-access-npg9k\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137148 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysconfig\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137172 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-conf\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137237 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-modprobe-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137271 7385 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-sys\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137318 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-host\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137347 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-run\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137368 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-tmp\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137396 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-lib-modules\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137427 7385 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137460 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-var-lib-kubelet\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137490 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-systemd\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.138474 master-0 kubenswrapper[7385]: I0319 09:19:07.137512 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-kubernetes\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.144944 master-0 kubenswrapper[7385]: I0319 09:19:07.142562 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" event={"ID":"45523224-f530-4354-90de-7fd65a1a3911","Type":"ContainerStarted","Data":"2d884373f1861114610b41f58c5b38b9fede59bd0996c9119ccf37eb5b72a4ac"} Mar 19 09:19:07.241268 master-0 kubenswrapper[7385]: I0319 09:19:07.241003 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-var-lib-kubelet\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241268 master-0 kubenswrapper[7385]: I0319 09:19:07.241273 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-systemd\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241552 master-0 kubenswrapper[7385]: I0319 09:19:07.241304 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-kubernetes\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241552 master-0 kubenswrapper[7385]: I0319 09:19:07.241348 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-tuned\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241552 master-0 kubenswrapper[7385]: I0319 09:19:07.241409 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npg9k\" (UniqueName: \"kubernetes.io/projected/dde1a2d9-a43e-4b26-82d7-e0f83577468f-kube-api-access-npg9k\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241552 master-0 kubenswrapper[7385]: I0319 09:19:07.241438 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysconfig\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241552 master-0 kubenswrapper[7385]: I0319 09:19:07.241462 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-conf\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241740 master-0 kubenswrapper[7385]: I0319 09:19:07.241571 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-modprobe-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241740 master-0 kubenswrapper[7385]: I0319 09:19:07.241594 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-sys\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241740 master-0 kubenswrapper[7385]: I0319 09:19:07.241650 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-host\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241740 master-0 kubenswrapper[7385]: I0319 09:19:07.241664 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-run\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241740 master-0 kubenswrapper[7385]: I0319 09:19:07.241678 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-tmp\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241740 master-0 kubenswrapper[7385]: I0319 09:19:07.241700 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-lib-modules\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241740 master-0 kubenswrapper[7385]: I0319 09:19:07.241723 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.241984 master-0 kubenswrapper[7385]: I0319 09:19:07.241809 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.245573 master-0 kubenswrapper[7385]: I0319 09:19:07.242359 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-var-lib-kubelet\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.245573 master-0 kubenswrapper[7385]: I0319 09:19:07.242610 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-systemd\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.245573 master-0 kubenswrapper[7385]: I0319 09:19:07.242672 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-kubernetes\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.246076 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysconfig\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.246376 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-conf\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.248124 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-modprobe-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.248170 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-sys\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.248215 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-run\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.248486 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-host\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.248831 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-lib-modules\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.249580 master-0 kubenswrapper[7385]: I0319 09:19:07.249420 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-tuned\") pod \"tuned-5r5sh\" (UID: 
\"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.252602 master-0 kubenswrapper[7385]: I0319 09:19:07.250712 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-tmp\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.261943 master-0 kubenswrapper[7385]: I0319 09:19:07.261899 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npg9k\" (UniqueName: \"kubernetes.io/projected/dde1a2d9-a43e-4b26-82d7-e0f83577468f-kube-api-access-npg9k\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.366947 master-0 kubenswrapper[7385]: I0319 09:19:07.366903 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:19:07.381764 master-0 kubenswrapper[7385]: W0319 09:19:07.381715 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde1a2d9_a43e_4b26_82d7_e0f83577468f.slice/crio-e85571cf91c66e7c4c3e43248c7d78f871032f052d0689c7f35a5530b263c03b WatchSource:0}: Error finding container e85571cf91c66e7c4c3e43248c7d78f871032f052d0689c7f35a5530b263c03b: Status 404 returned error can't find the container with id e85571cf91c66e7c4c3e43248c7d78f871032f052d0689c7f35a5530b263c03b Mar 19 09:19:07.384163 master-0 kubenswrapper[7385]: I0319 09:19:07.384112 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:19:07.404933 master-0 kubenswrapper[7385]: I0319 09:19:07.404860 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-mf78p" podStartSLOduration=3.404838669 podStartE2EDuration="3.404838669s" podCreationTimestamp="2026-03-19 09:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:07.217488971 +0000 UTC m=+42.891918672" watchObservedRunningTime="2026-03-19 09:19:07.404838669 +0000 UTC m=+43.079268380" Mar 19 09:19:08.147867 master-0 kubenswrapper[7385]: I0319 09:19:08.147495 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" event={"ID":"dde1a2d9-a43e-4b26-82d7-e0f83577468f","Type":"ContainerStarted","Data":"31b5f8363f829bdec926462c23186f49bf18686a88586b4599d3cd0cc1291fdb"} Mar 19 09:19:08.148589 master-0 kubenswrapper[7385]: I0319 09:19:08.147883 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" event={"ID":"dde1a2d9-a43e-4b26-82d7-e0f83577468f","Type":"ContainerStarted","Data":"e85571cf91c66e7c4c3e43248c7d78f871032f052d0689c7f35a5530b263c03b"} Mar 19 09:19:08.356252 master-0 kubenswrapper[7385]: I0319 09:19:08.356198 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:08.356452 master-0 kubenswrapper[7385]: E0319 09:19:08.356368 7385 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 19 09:19:08.356533 master-0 kubenswrapper[7385]: E0319 09:19:08.356503 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls podName:56365780-b87d-43fc-95f5-8a44166aecf8 nodeName:}" failed. 
No retries permitted until 2026-03-19 09:19:12.356479105 +0000 UTC m=+48.030908806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls") pod "dns-default-9xr8p" (UID: "56365780-b87d-43fc-95f5-8a44166aecf8") : secret "dns-default-metrics-tls" not found Mar 19 09:19:09.368705 master-0 kubenswrapper[7385]: I0319 09:19:09.368636 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r" Mar 19 09:19:09.369430 master-0 kubenswrapper[7385]: E0319 09:19:09.368824 7385 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 19 09:19:09.369430 master-0 kubenswrapper[7385]: I0319 09:19:09.368848 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") pod \"route-controller-manager-69d4668cd7-dbm2r\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") " pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r" Mar 19 09:19:09.369430 master-0 kubenswrapper[7385]: E0319 09:19:09.368891 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:41.368873417 +0000 UTC m=+77.043303138 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : secret "serving-cert" not found Mar 19 09:19:09.369430 master-0 kubenswrapper[7385]: E0319 09:19:09.368910 7385 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 19 09:19:09.369430 master-0 kubenswrapper[7385]: E0319 09:19:09.368980 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca podName:56aed905-9823-4165-9bcd-c4d7ce7bed90 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:41.368942 +0000 UTC m=+77.043371701 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca") pod "route-controller-manager-69d4668cd7-dbm2r" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90") : configmap "client-ca" not found Mar 19 09:19:10.687474 master-0 kubenswrapper[7385]: I0319 09:19:10.687392 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" podStartSLOduration=3.687368538 podStartE2EDuration="3.687368538s" podCreationTimestamp="2026-03-19 09:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:08.160524662 +0000 UTC m=+43.834954373" watchObservedRunningTime="2026-03-19 09:19:10.687368538 +0000 UTC m=+46.361798279" Mar 19 09:19:10.689676 master-0 kubenswrapper[7385]: I0319 09:19:10.689641 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"] Mar 19 09:19:10.690534 master-0 kubenswrapper[7385]: I0319 09:19:10.690507 7385 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.693315 master-0 kubenswrapper[7385]: I0319 09:19:10.693281 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 09:19:10.693405 master-0 kubenswrapper[7385]: I0319 09:19:10.693357 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 09:19:10.693522 master-0 kubenswrapper[7385]: I0319 09:19:10.693486 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 09:19:10.694978 master-0 kubenswrapper[7385]: I0319 09:19:10.694921 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 09:19:10.695044 master-0 kubenswrapper[7385]: I0319 09:19:10.694934 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 09:19:10.696837 master-0 kubenswrapper[7385]: I0319 09:19:10.696796 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 09:19:10.698460 master-0 kubenswrapper[7385]: I0319 09:19:10.698429 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 19 09:19:10.703307 master-0 kubenswrapper[7385]: I0319 09:19:10.703241 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"] Mar 19 09:19:10.703650 master-0 kubenswrapper[7385]: I0319 09:19:10.703609 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 09:19:10.784838 master-0 kubenswrapper[7385]: I0319 09:19:10.784801 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.784838 master-0 kubenswrapper[7385]: I0319 09:19:10.784852 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.785062 master-0 kubenswrapper[7385]: I0319 09:19:10.784987 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-dir\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.785062 master-0 kubenswrapper[7385]: I0319 09:19:10.785035 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6t27\" (UniqueName: \"kubernetes.io/projected/561b7381-8439-4ccc-ac50-d7a50aeb0c55-kube-api-access-t6t27\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.785130 master-0 kubenswrapper[7385]: I0319 09:19:10.785097 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" 
Mar 19 09:19:10.785186 master-0 kubenswrapper[7385]: I0319 09:19:10.785157 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.785314 master-0 kubenswrapper[7385]: I0319 09:19:10.785270 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.785486 master-0 kubenswrapper[7385]: I0319 09:19:10.785463 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.886171 master-0 kubenswrapper[7385]: I0319 09:19:10.886115 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.886357 master-0 kubenswrapper[7385]: I0319 09:19:10.886333 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert\") pod \"apiserver-775788bf78-tgdnw\" (UID: 
\"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.886418 master-0 kubenswrapper[7385]: I0319 09:19:10.886393 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.886576 master-0 kubenswrapper[7385]: I0319 09:19:10.886559 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.886620 master-0 kubenswrapper[7385]: I0319 09:19:10.886585 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.887065 master-0 kubenswrapper[7385]: I0319 09:19:10.887020 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-dir\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.887109 master-0 kubenswrapper[7385]: I0319 09:19:10.887070 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6t27\" (UniqueName: 
\"kubernetes.io/projected/561b7381-8439-4ccc-ac50-d7a50aeb0c55-kube-api-access-t6t27\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.887166 master-0 kubenswrapper[7385]: I0319 09:19:10.887131 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-dir\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.887200 master-0 kubenswrapper[7385]: I0319 09:19:10.887189 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.887465 master-0 kubenswrapper[7385]: I0319 09:19:10.887429 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.887518 master-0 kubenswrapper[7385]: I0319 09:19:10.887445 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.887620 master-0 kubenswrapper[7385]: I0319 09:19:10.887591 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.890461 master-0 kubenswrapper[7385]: I0319 09:19:10.890411 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.890759 master-0 kubenswrapper[7385]: I0319 09:19:10.890701 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.890919 master-0 kubenswrapper[7385]: I0319 09:19:10.890889 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:10.900918 master-0 kubenswrapper[7385]: I0319 09:19:10.900880 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6t27\" (UniqueName: \"kubernetes.io/projected/561b7381-8439-4ccc-ac50-d7a50aeb0c55-kube-api-access-t6t27\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:11.031040 master-0 kubenswrapper[7385]: I0319 09:19:11.030912 7385 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:19:11.665262 master-0 kubenswrapper[7385]: I0319 09:19:11.665191 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"] Mar 19 09:19:11.672070 master-0 kubenswrapper[7385]: I0319 09:19:11.672013 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 09:19:11.672311 master-0 kubenswrapper[7385]: I0319 09:19:11.672270 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="9120887f-15a9-45e1-846d-dd85a5949ebb" containerName="installer" containerID="cri-o://c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d" gracePeriod=30 Mar 19 09:19:11.675743 master-0 kubenswrapper[7385]: W0319 09:19:11.675678 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod561b7381_8439_4ccc_ac50_d7a50aeb0c55.slice/crio-204782aa21e2bf31865a1381946590d0ce8a970fb26f83eebd02fa7b0497c2c5 WatchSource:0}: Error finding container 204782aa21e2bf31865a1381946590d0ce8a970fb26f83eebd02fa7b0497c2c5: Status 404 returned error can't find the container with id 204782aa21e2bf31865a1381946590d0ce8a970fb26f83eebd02fa7b0497c2c5 Mar 19 09:19:12.170798 master-0 kubenswrapper[7385]: I0319 09:19:12.170737 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" event={"ID":"561b7381-8439-4ccc-ac50-d7a50aeb0c55","Type":"ContainerStarted","Data":"204782aa21e2bf31865a1381946590d0ce8a970fb26f83eebd02fa7b0497c2c5"} Mar 19 09:19:12.407561 master-0 kubenswrapper[7385]: I0319 09:19:12.407304 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p"
Mar 19 09:19:12.410059 master-0 kubenswrapper[7385]: I0319 09:19:12.410026 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p"
Mar 19 09:19:12.683954 master-0 kubenswrapper[7385]: I0319 09:19:12.683912 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9xr8p"
Mar 19 09:19:12.878385 master-0 kubenswrapper[7385]: I0319 09:19:12.878284 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:19:12.922294 master-0 kubenswrapper[7385]: I0319 09:19:12.915995 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:19:12.922294 master-0 kubenswrapper[7385]: I0319 09:19:12.916041 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:19:12.922294 master-0 kubenswrapper[7385]: E0319 09:19:12.916465 7385 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 19 09:19:12.922294 master-0 kubenswrapper[7385]: E0319 09:19:12.916522 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca podName:a3aa997e-848b-4c05-8fad-cb9b3d832a59 nodeName:}" failed. No retries permitted until 2026-03-19 09:19:44.916507894 +0000 UTC m=+80.590937585 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca") pod "controller-manager-658c4c5ff9-msbxc" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59") : configmap "client-ca" not found
Mar 19 09:19:12.922294 master-0 kubenswrapper[7385]: I0319 09:19:12.919222 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"controller-manager-658c4c5ff9-msbxc\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") " pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:19:13.062322 master-0 kubenswrapper[7385]: I0319 09:19:13.061839 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9xr8p"]
Mar 19 09:19:13.177757 master-0 kubenswrapper[7385]: I0319 09:19:13.177697 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9xr8p" event={"ID":"56365780-b87d-43fc-95f5-8a44166aecf8","Type":"ContainerStarted","Data":"cee650f463641d78c2e399a131e5c5cb6dd2c4bd205c9ebc6a4a1814777051c4"}
Mar 19 09:19:13.943952 master-0 kubenswrapper[7385]: I0319 09:19:13.943886 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 19 09:19:13.945466 master-0 kubenswrapper[7385]: I0319 09:19:13.945430 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:13.951773 master-0 kubenswrapper[7385]: I0319 09:19:13.951724 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 19 09:19:14.030487 master-0 kubenswrapper[7385]: I0319 09:19:14.030222 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-var-lock\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.030487 master-0 kubenswrapper[7385]: I0319 09:19:14.030306 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.030487 master-0 kubenswrapper[7385]: I0319 09:19:14.030347 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.136566 master-0 kubenswrapper[7385]: I0319 09:19:14.132732 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-var-lock\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.136566 master-0 kubenswrapper[7385]: I0319 09:19:14.132788 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.136566 master-0 kubenswrapper[7385]: I0319 09:19:14.132838 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.136566 master-0 kubenswrapper[7385]: I0319 09:19:14.133302 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-var-lock\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.136566 master-0 kubenswrapper[7385]: I0319 09:19:14.133342 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.173311 master-0 kubenswrapper[7385]: I0319 09:19:14.173268 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.267673 master-0 kubenswrapper[7385]: I0319 09:19:14.267633 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:14.641038 master-0 kubenswrapper[7385]: I0319 09:19:14.639756 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 19 09:19:14.649175 master-0 kubenswrapper[7385]: W0319 09:19:14.649009 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf130dfec_5c6d_44d4_8a13_112d3ccd4511.slice/crio-673bf35b216630536f4b4880dc5243bda1d02ee1820a4dfe7ae69ae40082b611 WatchSource:0}: Error finding container 673bf35b216630536f4b4880dc5243bda1d02ee1820a4dfe7ae69ae40082b611: Status 404 returned error can't find the container with id 673bf35b216630536f4b4880dc5243bda1d02ee1820a4dfe7ae69ae40082b611
Mar 19 09:19:15.197052 master-0 kubenswrapper[7385]: I0319 09:19:15.196994 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f130dfec-5c6d-44d4-8a13-112d3ccd4511","Type":"ContainerStarted","Data":"d406fcbf9ee8fa471b4015bf63e32acea0c92f160254c0b2773d8ca3af979b03"}
Mar 19 09:19:15.197052 master-0 kubenswrapper[7385]: I0319 09:19:15.197048 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f130dfec-5c6d-44d4-8a13-112d3ccd4511","Type":"ContainerStarted","Data":"673bf35b216630536f4b4880dc5243bda1d02ee1820a4dfe7ae69ae40082b611"}
Mar 19 09:19:15.203704 master-0 kubenswrapper[7385]: I0319 09:19:15.199434 7385 generic.go:334] "Generic (PLEG): container finished" podID="561b7381-8439-4ccc-ac50-d7a50aeb0c55" containerID="c42177f0a6bfccde75c92bce6a5608676cc3c57606fa245b67568e6ea94f8cb0" exitCode=0
Mar 19 09:19:15.203704 master-0 kubenswrapper[7385]: I0319 09:19:15.199483 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" event={"ID":"561b7381-8439-4ccc-ac50-d7a50aeb0c55","Type":"ContainerDied","Data":"c42177f0a6bfccde75c92bce6a5608676cc3c57606fa245b67568e6ea94f8cb0"}
Mar 19 09:19:15.210445 master-0 kubenswrapper[7385]: I0319 09:19:15.210382 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.210362371 podStartE2EDuration="2.210362371s" podCreationTimestamp="2026-03-19 09:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:15.208613982 +0000 UTC m=+50.883043693" watchObservedRunningTime="2026-03-19 09:19:15.210362371 +0000 UTC m=+50.884792092"
Mar 19 09:19:15.857921 master-0 kubenswrapper[7385]: I0319 09:19:15.855977 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"]
Mar 19 09:19:15.857921 master-0 kubenswrapper[7385]: E0319 09:19:15.856409 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc" podUID="a3aa997e-848b-4c05-8fad-cb9b3d832a59"
Mar 19 09:19:15.873085 master-0 kubenswrapper[7385]: I0319 09:19:15.871681 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"]
Mar 19 09:19:15.873085 master-0 kubenswrapper[7385]: E0319 09:19:15.871994 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r" podUID="56aed905-9823-4165-9bcd-c4d7ce7bed90"
Mar 19 09:19:16.208561 master-0 kubenswrapper[7385]: I0319 09:19:16.207536 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9xr8p" event={"ID":"56365780-b87d-43fc-95f5-8a44166aecf8","Type":"ContainerStarted","Data":"b95ae8a173da8ba874a28a3243c67641c25572b01c840c3cad63415047550f98"}
Mar 19 09:19:16.208561 master-0 kubenswrapper[7385]: I0319 09:19:16.207668 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9xr8p" event={"ID":"56365780-b87d-43fc-95f5-8a44166aecf8","Type":"ContainerStarted","Data":"79362f7c56fbe62c7256d8e7a093b40ce843988e024184916a4dc9c08d79e72f"}
Mar 19 09:19:16.208561 master-0 kubenswrapper[7385]: I0319 09:19:16.207950 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9xr8p"
Mar 19 09:19:16.210837 master-0 kubenswrapper[7385]: I0319 09:19:16.210794 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:19:16.210915 master-0 kubenswrapper[7385]: I0319 09:19:16.210846 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:19:16.211058 master-0 kubenswrapper[7385]: I0319 09:19:16.210797 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" event={"ID":"561b7381-8439-4ccc-ac50-d7a50aeb0c55","Type":"ContainerStarted","Data":"e81a8445d65587d51a98b8e35a9a781bf5c708dbe7c47079795ba6a8389528c2"}
Mar 19 09:19:16.219653 master-0 kubenswrapper[7385]: I0319 09:19:16.219617 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:19:16.229891 master-0 kubenswrapper[7385]: I0319 09:19:16.227192 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9xr8p" podStartSLOduration=9.569609535 podStartE2EDuration="12.227174499s" podCreationTimestamp="2026-03-19 09:19:04 +0000 UTC" firstStartedPulling="2026-03-19 09:19:13.090725981 +0000 UTC m=+48.765155672" lastFinishedPulling="2026-03-19 09:19:15.748290935 +0000 UTC m=+51.422720636" observedRunningTime="2026-03-19 09:19:16.22472038 +0000 UTC m=+51.899150091" watchObservedRunningTime="2026-03-19 09:19:16.227174499 +0000 UTC m=+51.901604210"
Mar 19 09:19:16.229891 master-0 kubenswrapper[7385]: I0319 09:19:16.227398 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:19:16.254815 master-0 kubenswrapper[7385]: I0319 09:19:16.254163 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" podStartSLOduration=3.715555353 podStartE2EDuration="6.254146032s" podCreationTimestamp="2026-03-19 09:19:10 +0000 UTC" firstStartedPulling="2026-03-19 09:19:11.68627755 +0000 UTC m=+47.360707251" lastFinishedPulling="2026-03-19 09:19:14.224868229 +0000 UTC m=+49.899297930" observedRunningTime="2026-03-19 09:19:16.253045641 +0000 UTC m=+51.927475362" watchObservedRunningTime="2026-03-19 09:19:16.254146032 +0000 UTC m=+51.928575733"
Mar 19 09:19:16.366608 master-0 kubenswrapper[7385]: I0319 09:19:16.366560 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc924\" (UniqueName: \"kubernetes.io/projected/56aed905-9823-4165-9bcd-c4d7ce7bed90-kube-api-access-rc924\") pod \"56aed905-9823-4165-9bcd-c4d7ce7bed90\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") "
Mar 19 09:19:16.366797 master-0 kubenswrapper[7385]: I0319 09:19:16.366624 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvsmb\" (UniqueName: \"kubernetes.io/projected/a3aa997e-848b-4c05-8fad-cb9b3d832a59-kube-api-access-zvsmb\") pod \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") "
Mar 19 09:19:16.366797 master-0 kubenswrapper[7385]: I0319 09:19:16.366648 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") pod \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") "
Mar 19 09:19:16.366797 master-0 kubenswrapper[7385]: I0319 09:19:16.366692 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-proxy-ca-bundles\") pod \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") "
Mar 19 09:19:16.366797 master-0 kubenswrapper[7385]: I0319 09:19:16.366719 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-config\") pod \"56aed905-9823-4165-9bcd-c4d7ce7bed90\" (UID: \"56aed905-9823-4165-9bcd-c4d7ce7bed90\") "
Mar 19 09:19:16.368481 master-0 kubenswrapper[7385]: I0319 09:19:16.367103 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-config\") pod \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\" (UID: \"a3aa997e-848b-4c05-8fad-cb9b3d832a59\") "
Mar 19 09:19:16.368481 master-0 kubenswrapper[7385]: I0319 09:19:16.367109 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-config" (OuterVolumeSpecName: "config") pod "56aed905-9823-4165-9bcd-c4d7ce7bed90" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:19:16.368481 master-0 kubenswrapper[7385]: I0319 09:19:16.367419 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-config" (OuterVolumeSpecName: "config") pod "a3aa997e-848b-4c05-8fad-cb9b3d832a59" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:19:16.368481 master-0 kubenswrapper[7385]: I0319 09:19:16.367199 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a3aa997e-848b-4c05-8fad-cb9b3d832a59" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:19:16.368481 master-0 kubenswrapper[7385]: I0319 09:19:16.367946 7385 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:16.368481 master-0 kubenswrapper[7385]: I0319 09:19:16.367960 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:16.368481 master-0 kubenswrapper[7385]: I0319 09:19:16.367970 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:16.369821 master-0 kubenswrapper[7385]: I0319 09:19:16.369663 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a3aa997e-848b-4c05-8fad-cb9b3d832a59" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:19:16.370154 master-0 kubenswrapper[7385]: I0319 09:19:16.370026 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3aa997e-848b-4c05-8fad-cb9b3d832a59-kube-api-access-zvsmb" (OuterVolumeSpecName: "kube-api-access-zvsmb") pod "a3aa997e-848b-4c05-8fad-cb9b3d832a59" (UID: "a3aa997e-848b-4c05-8fad-cb9b3d832a59"). InnerVolumeSpecName "kube-api-access-zvsmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:19:16.370154 master-0 kubenswrapper[7385]: I0319 09:19:16.370118 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56aed905-9823-4165-9bcd-c4d7ce7bed90-kube-api-access-rc924" (OuterVolumeSpecName: "kube-api-access-rc924") pod "56aed905-9823-4165-9bcd-c4d7ce7bed90" (UID: "56aed905-9823-4165-9bcd-c4d7ce7bed90"). InnerVolumeSpecName "kube-api-access-rc924". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:19:16.469109 master-0 kubenswrapper[7385]: I0319 09:19:16.468997 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc924\" (UniqueName: \"kubernetes.io/projected/56aed905-9823-4165-9bcd-c4d7ce7bed90-kube-api-access-rc924\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:16.469109 master-0 kubenswrapper[7385]: I0319 09:19:16.469035 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvsmb\" (UniqueName: \"kubernetes.io/projected/a3aa997e-848b-4c05-8fad-cb9b3d832a59-kube-api-access-zvsmb\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:16.469109 master-0 kubenswrapper[7385]: I0319 09:19:16.469045 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3aa997e-848b-4c05-8fad-cb9b3d832a59-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:17.214239 master-0 kubenswrapper[7385]: I0319 09:19:17.214199 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"
Mar 19 09:19:17.214726 master-0 kubenswrapper[7385]: I0319 09:19:17.214261 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"
Mar 19 09:19:17.256108 master-0 kubenswrapper[7385]: I0319 09:19:17.256023 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"]
Mar 19 09:19:17.257646 master-0 kubenswrapper[7385]: I0319 09:19:17.257593 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"]
Mar 19 09:19:17.257717 master-0 kubenswrapper[7385]: I0319 09:19:17.257664 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.259156 master-0 kubenswrapper[7385]: I0319 09:19:17.259112 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69d4668cd7-dbm2r"]
Mar 19 09:19:17.266589 master-0 kubenswrapper[7385]: I0319 09:19:17.263158 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 19 09:19:17.266589 master-0 kubenswrapper[7385]: I0319 09:19:17.263387 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 19 09:19:17.266589 master-0 kubenswrapper[7385]: I0319 09:19:17.263402 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 19 09:19:17.266589 master-0 kubenswrapper[7385]: I0319 09:19:17.263685 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 19 09:19:17.266589 master-0 kubenswrapper[7385]: I0319 09:19:17.263871 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 19 09:19:17.266589 master-0 kubenswrapper[7385]: I0319 09:19:17.266575 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"]
Mar 19 09:19:17.308075 master-0 kubenswrapper[7385]: I0319 09:19:17.303874 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"]
Mar 19 09:19:17.308075 master-0 kubenswrapper[7385]: I0319 09:19:17.306106 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-658c4c5ff9-msbxc"]
Mar 19 09:19:17.380197 master-0 kubenswrapper[7385]: I0319 09:19:17.380136 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-config\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.380390 master-0 kubenswrapper[7385]: I0319 09:19:17.380277 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnbq8\" (UniqueName: \"kubernetes.io/projected/24e84d52-ae67-40d0-a2c5-39160b90fa0e-kube-api-access-jnbq8\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.380390 master-0 kubenswrapper[7385]: I0319 09:19:17.380347 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e84d52-ae67-40d0-a2c5-39160b90fa0e-serving-cert\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.380879 master-0 kubenswrapper[7385]: I0319 09:19:17.380562 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-client-ca\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.380879 master-0 kubenswrapper[7385]: I0319 09:19:17.380657 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56aed905-9823-4165-9bcd-c4d7ce7bed90-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:17.380879 master-0 kubenswrapper[7385]: I0319 09:19:17.380673 7385 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56aed905-9823-4165-9bcd-c4d7ce7bed90-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:17.482122 master-0 kubenswrapper[7385]: I0319 09:19:17.481981 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-client-ca\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.482122 master-0 kubenswrapper[7385]: I0319 09:19:17.482069 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-config\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.482122 master-0 kubenswrapper[7385]: I0319 09:19:17.482113 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnbq8\" (UniqueName: \"kubernetes.io/projected/24e84d52-ae67-40d0-a2c5-39160b90fa0e-kube-api-access-jnbq8\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.482560 master-0 kubenswrapper[7385]: I0319 09:19:17.482516 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e84d52-ae67-40d0-a2c5-39160b90fa0e-serving-cert\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.482675 master-0 kubenswrapper[7385]: I0319 09:19:17.482654 7385 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a3aa997e-848b-4c05-8fad-cb9b3d832a59-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:17.483161 master-0 kubenswrapper[7385]: I0319 09:19:17.483122 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-client-ca\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.483253 master-0 kubenswrapper[7385]: I0319 09:19:17.483213 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-config\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.486112 master-0 kubenswrapper[7385]: I0319 09:19:17.486085 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e84d52-ae67-40d0-a2c5-39160b90fa0e-serving-cert\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.499137 master-0 kubenswrapper[7385]: I0319 09:19:17.499099 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnbq8\" (UniqueName: \"kubernetes.io/projected/24e84d52-ae67-40d0-a2c5-39160b90fa0e-kube-api-access-jnbq8\") pod \"route-controller-manager-f897ddc75-l2pbj\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:17.585657 master-0 kubenswrapper[7385]: I0319 09:19:17.585592 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:18.018678 master-0 kubenswrapper[7385]: I0319 09:19:18.014919 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"]
Mar 19 09:19:18.018678 master-0 kubenswrapper[7385]: I0319 09:19:18.016918 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 19 09:19:18.018678 master-0 kubenswrapper[7385]: I0319 09:19:18.017573 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.018678 master-0 kubenswrapper[7385]: I0319 09:19:18.018111 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 19 09:19:18.021623 master-0 kubenswrapper[7385]: I0319 09:19:18.021596 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 19 09:19:18.153810 master-0 kubenswrapper[7385]: I0319 09:19:18.153773 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-z9khh_fe1881fb-c670-442a-a092-c1eee6b7d5e5/authentication-operator/0.log"
Mar 19 09:19:18.209742 master-0 kubenswrapper[7385]: I0319 09:19:18.209669 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-var-lock\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.210033 master-0 kubenswrapper[7385]: I0319 09:19:18.209934 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.210092 master-0 kubenswrapper[7385]: I0319 09:19:18.210057 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3ab0802-da8a-475c-a707-09f7838f580b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.219322 master-0 kubenswrapper[7385]: I0319 09:19:18.219272 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" event={"ID":"24e84d52-ae67-40d0-a2c5-39160b90fa0e","Type":"ContainerStarted","Data":"1dd0b82916d571c08bf7c2cfa784425a6e307c8f39fb70ae48f26d80408e5899"}
Mar 19 09:19:18.311267 master-0 kubenswrapper[7385]: I0319 09:19:18.311152 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-var-lock\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.311267 master-0 kubenswrapper[7385]: I0319 09:19:18.311258 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.311433 master-0 kubenswrapper[7385]: I0319 09:19:18.311292 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3ab0802-da8a-475c-a707-09f7838f580b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.311493 master-0 kubenswrapper[7385]: I0319 09:19:18.311438 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-var-lock\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.311732 master-0 kubenswrapper[7385]: I0319 09:19:18.311695 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.329511 master-0 kubenswrapper[7385]: I0319 09:19:18.329267 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3ab0802-da8a-475c-a707-09f7838f580b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.350537 master-0 kubenswrapper[7385]: I0319 09:19:18.350497 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-k89rz_45523224-f530-4354-90de-7fd65a1a3911/dns-operator/0.log"
Mar 19 09:19:18.360716 master-0 kubenswrapper[7385]: I0319 09:19:18.360672 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:19:18.493968 master-0 kubenswrapper[7385]: I0319 09:19:18.493918 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"]
Mar 19 09:19:18.494210 master-0 kubenswrapper[7385]: I0319 09:19:18.494170 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" podUID="16c631c1-277e-47d2-9377-a0bbd14673d4" containerName="cluster-version-operator" containerID="cri-o://e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0" gracePeriod=130
Mar 19 09:19:18.542763 master-0 kubenswrapper[7385]: I0319 09:19:18.542721 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56aed905-9823-4165-9bcd-c4d7ce7bed90" path="/var/lib/kubelet/pods/56aed905-9823-4165-9bcd-c4d7ce7bed90/volumes"
Mar 19 09:19:18.543148 master-0 kubenswrapper[7385]: I0319 09:19:18.543127 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3aa997e-848b-4c05-8fad-cb9b3d832a59" path="/var/lib/kubelet/pods/a3aa997e-848b-4c05-8fad-cb9b3d832a59/volumes"
Mar 19 09:19:18.548051 master-0 kubenswrapper[7385]: I0319 09:19:18.548012 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-k89rz_45523224-f530-4354-90de-7fd65a1a3911/kube-rbac-proxy/0.log"
Mar 19 09:19:18.748684 master-0 kubenswrapper[7385]: I0319 09:19:18.748630 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-9xr8p_56365780-b87d-43fc-95f5-8a44166aecf8/dns/0.log"
Mar 19 09:19:18.750464 master-0 kubenswrapper[7385]: I0319 09:19:18.750430 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:19:18.828182 master-0 kubenswrapper[7385]: I0319 09:19:18.828126 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca\") pod \"16c631c1-277e-47d2-9377-a0bbd14673d4\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " Mar 19 09:19:18.828182 master-0 kubenswrapper[7385]: I0319 09:19:18.828179 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") pod \"16c631c1-277e-47d2-9377-a0bbd14673d4\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " Mar 19 09:19:18.828527 master-0 kubenswrapper[7385]: I0319 09:19:18.828211 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") pod \"16c631c1-277e-47d2-9377-a0bbd14673d4\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " Mar 19 09:19:18.828527 master-0 kubenswrapper[7385]: I0319 09:19:18.828242 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access\") pod \"16c631c1-277e-47d2-9377-a0bbd14673d4\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " Mar 19 09:19:18.828527 master-0 kubenswrapper[7385]: I0319 09:19:18.828280 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") pod \"16c631c1-277e-47d2-9377-a0bbd14673d4\" (UID: \"16c631c1-277e-47d2-9377-a0bbd14673d4\") " Mar 19 09:19:18.828695 master-0 kubenswrapper[7385]: I0319 
09:19:18.828531 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "16c631c1-277e-47d2-9377-a0bbd14673d4" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:19:18.829120 master-0 kubenswrapper[7385]: I0319 09:19:18.829082 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca" (OuterVolumeSpecName: "service-ca") pod "16c631c1-277e-47d2-9377-a0bbd14673d4" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:19:18.830278 master-0 kubenswrapper[7385]: I0319 09:19:18.830222 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "16c631c1-277e-47d2-9377-a0bbd14673d4" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:19:18.834914 master-0 kubenswrapper[7385]: I0319 09:19:18.834857 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "16c631c1-277e-47d2-9377-a0bbd14673d4" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:19:18.840448 master-0 kubenswrapper[7385]: I0319 09:19:18.840406 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16c631c1-277e-47d2-9377-a0bbd14673d4" (UID: "16c631c1-277e-47d2-9377-a0bbd14673d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:19:18.841998 master-0 kubenswrapper[7385]: I0319 09:19:18.841963 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 19 09:19:18.848742 master-0 kubenswrapper[7385]: W0319 09:19:18.848687 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode3ab0802_da8a_475c_a707_09f7838f580b.slice/crio-c2904eb335d23e11e23721447bebed6e83898b398c508def8b073f85f1f0f7e4 WatchSource:0}: Error finding container c2904eb335d23e11e23721447bebed6e83898b398c508def8b073f85f1f0f7e4: Status 404 returned error can't find the container with id c2904eb335d23e11e23721447bebed6e83898b398c508def8b073f85f1f0f7e4 Mar 19 09:19:18.930198 master-0 kubenswrapper[7385]: I0319 09:19:18.930093 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16c631c1-277e-47d2-9377-a0bbd14673d4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:18.930198 master-0 kubenswrapper[7385]: I0319 09:19:18.930144 7385 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:18.930198 master-0 kubenswrapper[7385]: I0319 09:19:18.930159 7385 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/16c631c1-277e-47d2-9377-a0bbd14673d4-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:18.930198 master-0 kubenswrapper[7385]: I0319 09:19:18.930172 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16c631c1-277e-47d2-9377-a0bbd14673d4-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:18.930198 master-0 kubenswrapper[7385]: I0319 09:19:18.930186 7385 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/16c631c1-277e-47d2-9377-a0bbd14673d4-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:18.947264 master-0 kubenswrapper[7385]: I0319 09:19:18.947201 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-9xr8p_56365780-b87d-43fc-95f5-8a44166aecf8/kube-rbac-proxy/0.log" Mar 19 09:19:19.146079 master-0 kubenswrapper[7385]: I0319 09:19:19.145898 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-mf78p_a591384f-f83e-4f65-b5d0-d519f05edbd9/dns-node-resolver/0.log" Mar 19 09:19:19.202488 master-0 kubenswrapper[7385]: I0319 09:19:19.202384 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 19 09:19:19.203654 master-0 kubenswrapper[7385]: E0319 09:19:19.202856 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c631c1-277e-47d2-9377-a0bbd14673d4" containerName="cluster-version-operator" Mar 19 09:19:19.203654 master-0 kubenswrapper[7385]: I0319 09:19:19.202873 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c631c1-277e-47d2-9377-a0bbd14673d4" containerName="cluster-version-operator" Mar 19 09:19:19.203654 master-0 kubenswrapper[7385]: I0319 09:19:19.202983 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c631c1-277e-47d2-9377-a0bbd14673d4" containerName="cluster-version-operator" Mar 19 
09:19:19.203654 master-0 kubenswrapper[7385]: I0319 09:19:19.203255 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.205437 master-0 kubenswrapper[7385]: I0319 09:19:19.205299 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 19 09:19:19.214719 master-0 kubenswrapper[7385]: I0319 09:19:19.212422 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 19 09:19:19.229645 master-0 kubenswrapper[7385]: I0319 09:19:19.229358 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e3ab0802-da8a-475c-a707-09f7838f580b","Type":"ContainerStarted","Data":"a1c35003004ca85e3194260594ce7980c9cfead4c46c7a6e5e65ede51128fa87"} Mar 19 09:19:19.229645 master-0 kubenswrapper[7385]: I0319 09:19:19.229414 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e3ab0802-da8a-475c-a707-09f7838f580b","Type":"ContainerStarted","Data":"c2904eb335d23e11e23721447bebed6e83898b398c508def8b073f85f1f0f7e4"} Mar 19 09:19:19.231490 master-0 kubenswrapper[7385]: I0319 09:19:19.231000 7385 generic.go:334] "Generic (PLEG): container finished" podID="16c631c1-277e-47d2-9377-a0bbd14673d4" containerID="e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0" exitCode=0 Mar 19 09:19:19.231490 master-0 kubenswrapper[7385]: I0319 09:19:19.231035 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" event={"ID":"16c631c1-277e-47d2-9377-a0bbd14673d4","Type":"ContainerDied","Data":"e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0"} Mar 19 09:19:19.231490 master-0 kubenswrapper[7385]: I0319 09:19:19.231056 7385 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" event={"ID":"16c631c1-277e-47d2-9377-a0bbd14673d4","Type":"ContainerDied","Data":"715b309f665a655842c51c00d42f465aa6afd85addb3e0939c3cbfa5da354926"} Mar 19 09:19:19.231490 master-0 kubenswrapper[7385]: I0319 09:19:19.231076 7385 scope.go:117] "RemoveContainer" containerID="e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0" Mar 19 09:19:19.231490 master-0 kubenswrapper[7385]: I0319 09:19:19.231178 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d" Mar 19 09:19:19.241167 master-0 kubenswrapper[7385]: I0319 09:19:19.235146 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-var-lock\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.241167 master-0 kubenswrapper[7385]: I0319 09:19:19.235230 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.241167 master-0 kubenswrapper[7385]: I0319 09:19:19.235255 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec98e408-a574-40eb-b84d-111edbaab81a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.248601 master-0 kubenswrapper[7385]: I0319 09:19:19.248524 7385 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=1.24850726 podStartE2EDuration="1.24850726s" podCreationTimestamp="2026-03-19 09:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:19.246796423 +0000 UTC m=+54.921226134" watchObservedRunningTime="2026-03-19 09:19:19.24850726 +0000 UTC m=+54.922936951" Mar 19 09:19:19.251887 master-0 kubenswrapper[7385]: I0319 09:19:19.251847 7385 scope.go:117] "RemoveContainer" containerID="e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0" Mar 19 09:19:19.252254 master-0 kubenswrapper[7385]: E0319 09:19:19.252227 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0\": container with ID starting with e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0 not found: ID does not exist" containerID="e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0" Mar 19 09:19:19.252302 master-0 kubenswrapper[7385]: I0319 09:19:19.252264 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0"} err="failed to get container status \"e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0\": rpc error: code = NotFound desc = could not find container \"e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0\": container with ID starting with e402f0b750cf5b2d297631b713d9617ed8e7f965ae48184711cc1ba9f2b9f0c0 not found: ID does not exist" Mar 19 09:19:19.268782 master-0 kubenswrapper[7385]: I0319 09:19:19.268651 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"] Mar 19 
09:19:19.270873 master-0 kubenswrapper[7385]: I0319 09:19:19.270828 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-vmv8d"] Mar 19 09:19:19.311304 master-0 kubenswrapper[7385]: I0319 09:19:19.311237 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl"] Mar 19 09:19:19.312060 master-0 kubenswrapper[7385]: I0319 09:19:19.312025 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.313791 master-0 kubenswrapper[7385]: I0319 09:19:19.313752 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 09:19:19.315142 master-0 kubenswrapper[7385]: I0319 09:19:19.315104 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 09:19:19.315247 master-0 kubenswrapper[7385]: I0319 09:19:19.315215 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 09:19:19.336468 master-0 kubenswrapper[7385]: I0319 09:19:19.336374 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-var-lock\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.336468 master-0 kubenswrapper[7385]: I0319 09:19:19.336432 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-var-lock\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 
09:19:19.336858 master-0 kubenswrapper[7385]: I0319 09:19:19.336685 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.336858 master-0 kubenswrapper[7385]: I0319 09:19:19.336713 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.336858 master-0 kubenswrapper[7385]: I0319 09:19:19.336733 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec98e408-a574-40eb-b84d-111edbaab81a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.336858 master-0 kubenswrapper[7385]: I0319 09:19:19.336749 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.336858 master-0 kubenswrapper[7385]: I0319 09:19:19.336766 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca\") pod 
\"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.336858 master-0 kubenswrapper[7385]: I0319 09:19:19.336791 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ded5da9a-1447-46df-a8ff-ffd469562599-kube-api-access\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.337130 master-0 kubenswrapper[7385]: I0319 09:19:19.337063 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.338299 master-0 kubenswrapper[7385]: I0319 09:19:19.338267 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.352124 master-0 kubenswrapper[7385]: I0319 09:19:19.352084 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec98e408-a574-40eb-b84d-111edbaab81a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.353777 master-0 kubenswrapper[7385]: I0319 09:19:19.353751 7385 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-cfmgj_1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/etcd-operator/0.log" Mar 19 09:19:19.438974 master-0 kubenswrapper[7385]: I0319 09:19:19.438358 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.438974 master-0 kubenswrapper[7385]: I0319 09:19:19.438407 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.438974 master-0 kubenswrapper[7385]: I0319 09:19:19.438425 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.438974 master-0 kubenswrapper[7385]: I0319 09:19:19.438451 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ded5da9a-1447-46df-a8ff-ffd469562599-kube-api-access\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.438974 master-0 kubenswrapper[7385]: I0319 09:19:19.438512 7385 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.438974 master-0 kubenswrapper[7385]: I0319 09:19:19.438651 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.438974 master-0 kubenswrapper[7385]: I0319 09:19:19.438864 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.439772 master-0 kubenswrapper[7385]: I0319 09:19:19.439744 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.441775 master-0 kubenswrapper[7385]: I0319 09:19:19.441734 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: 
\"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.454991 master-0 kubenswrapper[7385]: I0319 09:19:19.454894 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ded5da9a-1447-46df-a8ff-ffd469562599-kube-api-access\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.535952 master-0 kubenswrapper[7385]: I0319 09:19:19.535898 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:19:19.547251 master-0 kubenswrapper[7385]: I0319 09:19:19.547208 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 19 09:19:19.640042 master-0 kubenswrapper[7385]: I0319 09:19:19.639996 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:19:19.652165 master-0 kubenswrapper[7385]: W0319 09:19:19.652112 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podded5da9a_1447_46df_a8ff_ffd469562599.slice/crio-39c6768818fedea75d87ad8b7a8640832bffe77cbf3d443982b6c9295adc4865 WatchSource:0}: Error finding container 39c6768818fedea75d87ad8b7a8640832bffe77cbf3d443982b6c9295adc4865: Status 404 returned error can't find the container with id 39c6768818fedea75d87ad8b7a8640832bffe77cbf3d443982b6c9295adc4865 Mar 19 09:19:19.752610 master-0 kubenswrapper[7385]: I0319 09:19:19.752474 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcd/0.log" Mar 19 09:19:19.935445 master-0 kubenswrapper[7385]: I0319 09:19:19.935358 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bb7458647-2hx6x"] Mar 19 09:19:19.936276 master-0 kubenswrapper[7385]: I0319 09:19:19.936214 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:19.942149 master-0 kubenswrapper[7385]: I0319 09:19:19.938041 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 19 09:19:19.942149 master-0 kubenswrapper[7385]: I0319 09:19:19.938067 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 19 09:19:19.942149 master-0 kubenswrapper[7385]: I0319 09:19:19.938298 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 19 09:19:19.945996 master-0 kubenswrapper[7385]: I0319 09:19:19.943217 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 19 09:19:19.945996 master-0 kubenswrapper[7385]: I0319 09:19:19.944041 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f4f7b3-7f79-4618-b87a-400cadcb9813-serving-cert\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:19.945996 master-0 kubenswrapper[7385]: I0319 09:19:19.944101 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-proxy-ca-bundles\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:19.945996 master-0 kubenswrapper[7385]: I0319 09:19:19.944133 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-config\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:19.945996 master-0 kubenswrapper[7385]: I0319 09:19:19.944299 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhwqs\" (UniqueName: \"kubernetes.io/projected/c1f4f7b3-7f79-4618-b87a-400cadcb9813-kube-api-access-nhwqs\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:19.945996 master-0 kubenswrapper[7385]: I0319 09:19:19.944355 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-client-ca\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:19.945996 master-0 kubenswrapper[7385]: I0319 09:19:19.944621 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 19 09:19:19.947185 master-0 kubenswrapper[7385]: I0319 09:19:19.946766 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bb7458647-2hx6x"]
Mar 19 09:19:19.949370 master-0 kubenswrapper[7385]: I0319 09:19:19.949169 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 19 09:19:19.950484 master-0 kubenswrapper[7385]: I0319 09:19:19.950444 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_b9969717-8350-416e-8711-877cdf557d81/installer/0.log"
Mar 19 09:19:19.963308 master-0 kubenswrapper[7385]: I0319 09:19:19.962001 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 19 09:19:20.045444 master-0 kubenswrapper[7385]: I0319 09:19:20.045307 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhwqs\" (UniqueName: \"kubernetes.io/projected/c1f4f7b3-7f79-4618-b87a-400cadcb9813-kube-api-access-nhwqs\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.045444 master-0 kubenswrapper[7385]: I0319 09:19:20.045372 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-client-ca\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.045759 master-0 kubenswrapper[7385]: I0319 09:19:20.045669 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f4f7b3-7f79-4618-b87a-400cadcb9813-serving-cert\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.045759 master-0 kubenswrapper[7385]: I0319 09:19:20.045722 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-proxy-ca-bundles\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.045759 master-0 kubenswrapper[7385]: I0319 09:19:20.045747 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-config\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.046606 master-0 kubenswrapper[7385]: I0319 09:19:20.046576 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-client-ca\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.046957 master-0 kubenswrapper[7385]: I0319 09:19:20.046922 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-proxy-ca-bundles\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.047061 master-0 kubenswrapper[7385]: I0319 09:19:20.047015 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-config\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.048189 master-0 kubenswrapper[7385]: I0319 09:19:20.048160 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f4f7b3-7f79-4618-b87a-400cadcb9813-serving-cert\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.059824 master-0 kubenswrapper[7385]: I0319 09:19:20.059766 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhwqs\" (UniqueName: \"kubernetes.io/projected/c1f4f7b3-7f79-4618-b87a-400cadcb9813-kube-api-access-nhwqs\") pod \"controller-manager-5bb7458647-2hx6x\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.155924 master-0 kubenswrapper[7385]: I0319 09:19:20.155857 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-f5zsd_46c7cde3-2cb4-4fa8-94ca-d5feff877da9/kube-apiserver-operator/0.log"
Mar 19 09:19:20.236079 master-0 kubenswrapper[7385]: I0319 09:19:20.236014 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" event={"ID":"ded5da9a-1447-46df-a8ff-ffd469562599","Type":"ContainerStarted","Data":"39c6768818fedea75d87ad8b7a8640832bffe77cbf3d443982b6c9295adc4865"}
Mar 19 09:19:20.260184 master-0 kubenswrapper[7385]: I0319 09:19:20.260135 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:19:20.314097 master-0 kubenswrapper[7385]: W0319 09:19:20.313991 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podec98e408_a574_40eb_b84d_111edbaab81a.slice/crio-b2d499fdc3d3fa2bc3d6bd17fe41bec26683d20fa2510fec111d840f7bf16b36 WatchSource:0}: Error finding container b2d499fdc3d3fa2bc3d6bd17fe41bec26683d20fa2510fec111d840f7bf16b36: Status 404 returned error can't find the container with id b2d499fdc3d3fa2bc3d6bd17fe41bec26683d20fa2510fec111d840f7bf16b36
Mar 19 09:19:20.348416 master-0 kubenswrapper[7385]: I0319 09:19:20.348132 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log"
Mar 19 09:19:20.536411 master-0 kubenswrapper[7385]: I0319 09:19:20.536365 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c631c1-277e-47d2-9377-a0bbd14673d4" path="/var/lib/kubelet/pods/16c631c1-277e-47d2-9377-a0bbd14673d4/volumes"
Mar 19 09:19:20.552191 master-0 kubenswrapper[7385]: I0319 09:19:20.552152 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log"
Mar 19 09:19:20.694015 master-0 kubenswrapper[7385]: I0319 09:19:20.693966 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bb7458647-2hx6x"]
Mar 19 09:19:20.749578 master-0 kubenswrapper[7385]: I0319 09:19:20.747731 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log"
Mar 19 09:19:20.944725 master-0 kubenswrapper[7385]: I0319 09:19:20.943692 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 19 09:19:20.944725 master-0 kubenswrapper[7385]: I0319 09:19:20.943882 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="f130dfec-5c6d-44d4-8a13-112d3ccd4511" containerName="installer" containerID="cri-o://d406fcbf9ee8fa471b4015bf63e32acea0c92f160254c0b2773d8ca3af979b03" gracePeriod=30
Mar 19 09:19:20.956926 master-0 kubenswrapper[7385]: I0319 09:19:20.955801 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-wrdsg_ca2f7cb3-8812-4fe3-83a5-61668ef87f99/kube-controller-manager-operator/0.log"
Mar 19 09:19:21.032191 master-0 kubenswrapper[7385]: I0319 09:19:21.032126 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:19:21.032504 master-0 kubenswrapper[7385]: I0319 09:19:21.032209 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:19:21.037669 master-0 kubenswrapper[7385]: I0319 09:19:21.037636 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:19:21.153051 master-0 kubenswrapper[7385]: I0319 09:19:21.153005 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/kube-controller-manager/0.log"
Mar 19 09:19:21.245645 master-0 kubenswrapper[7385]: I0319 09:19:21.245555 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ec98e408-a574-40eb-b84d-111edbaab81a","Type":"ContainerStarted","Data":"bad1a4ade656dc88a2ff2cedf66c5fd93d2a5c35714abd9bee1ca36e672bdec3"}
Mar 19 09:19:21.245645 master-0 kubenswrapper[7385]: I0319 09:19:21.245602 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ec98e408-a574-40eb-b84d-111edbaab81a","Type":"ContainerStarted","Data":"b2d499fdc3d3fa2bc3d6bd17fe41bec26683d20fa2510fec111d840f7bf16b36"}
Mar 19 09:19:21.246691 master-0 kubenswrapper[7385]: I0319 09:19:21.246664 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_f130dfec-5c6d-44d4-8a13-112d3ccd4511/installer/0.log"
Mar 19 09:19:21.246728 master-0 kubenswrapper[7385]: I0319 09:19:21.246703 7385 generic.go:334] "Generic (PLEG): container finished" podID="f130dfec-5c6d-44d4-8a13-112d3ccd4511" containerID="d406fcbf9ee8fa471b4015bf63e32acea0c92f160254c0b2773d8ca3af979b03" exitCode=1
Mar 19 09:19:21.246769 master-0 kubenswrapper[7385]: I0319 09:19:21.246752 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f130dfec-5c6d-44d4-8a13-112d3ccd4511","Type":"ContainerDied","Data":"d406fcbf9ee8fa471b4015bf63e32acea0c92f160254c0b2773d8ca3af979b03"}
Mar 19 09:19:21.248271 master-0 kubenswrapper[7385]: I0319 09:19:21.248225 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" event={"ID":"24e84d52-ae67-40d0-a2c5-39160b90fa0e","Type":"ContainerStarted","Data":"65027fd5ee877d7eeb4f7d58b9d53307a5dbfb87a18aafd54a59cec2e61bf4d7"}
Mar 19 09:19:21.248626 master-0 kubenswrapper[7385]: I0319 09:19:21.248583 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:21.249399 master-0 kubenswrapper[7385]: I0319 09:19:21.249364 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" event={"ID":"c1f4f7b3-7f79-4618-b87a-400cadcb9813","Type":"ContainerStarted","Data":"dd6594e954d08ee67ec8680ad3fdfb434e914edac0a0cdab038657341b6e046d"}
Mar 19 09:19:21.253565 master-0 kubenswrapper[7385]: I0319 09:19:21.251516 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" event={"ID":"ded5da9a-1447-46df-a8ff-ffd469562599","Type":"ContainerStarted","Data":"5c0d59f8ce099c748a661f116e21ac9ceeb2f5758bc6d56b40e89d6cb4480b2d"}
Mar 19 09:19:21.260092 master-0 kubenswrapper[7385]: I0319 09:19:21.257280 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:19:21.260092 master-0 kubenswrapper[7385]: I0319 09:19:21.259296 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.259283701 podStartE2EDuration="2.259283701s" podCreationTimestamp="2026-03-19 09:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:21.258604792 +0000 UTC m=+56.933034533" watchObservedRunningTime="2026-03-19 09:19:21.259283701 +0000 UTC m=+56.933713402"
Mar 19 09:19:21.263463 master-0 kubenswrapper[7385]: I0319 09:19:21.263415 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"
Mar 19 09:19:21.276071 master-0 kubenswrapper[7385]: I0319 09:19:21.275819 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" podStartSLOduration=2.275796488 podStartE2EDuration="2.275796488s" podCreationTimestamp="2026-03-19 09:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:21.274965545 +0000 UTC m=+56.949395276" watchObservedRunningTime="2026-03-19 09:19:21.275796488 +0000 UTC m=+56.950226219"
Mar 19 09:19:21.294039 master-0 kubenswrapper[7385]: I0319 09:19:21.293964 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" podStartSLOduration=3.954247689 podStartE2EDuration="6.293946871s" podCreationTimestamp="2026-03-19 09:19:15 +0000 UTC" firstStartedPulling="2026-03-19 09:19:18.019927084 +0000 UTC m=+53.694356775" lastFinishedPulling="2026-03-19 09:19:20.359626256 +0000 UTC m=+56.034055957" observedRunningTime="2026-03-19 09:19:21.293803177 +0000 UTC m=+56.968232898" watchObservedRunningTime="2026-03-19 09:19:21.293946871 +0000 UTC m=+56.968376572"
Mar 19 09:19:21.372012 master-0 kubenswrapper[7385]: I0319 09:19:21.371991 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_f130dfec-5c6d-44d4-8a13-112d3ccd4511/installer/0.log"
Mar 19 09:19:21.372195 master-0 kubenswrapper[7385]: I0319 09:19:21.372184 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:21.375978 master-0 kubenswrapper[7385]: I0319 09:19:21.375891 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/cluster-policy-controller/0.log"
Mar 19 09:19:21.475569 master-0 kubenswrapper[7385]: I0319 09:19:21.473993 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kube-api-access\") pod \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") "
Mar 19 09:19:21.475569 master-0 kubenswrapper[7385]: I0319 09:19:21.474070 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kubelet-dir\") pod \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") "
Mar 19 09:19:21.475569 master-0 kubenswrapper[7385]: I0319 09:19:21.474089 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-var-lock\") pod \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\" (UID: \"f130dfec-5c6d-44d4-8a13-112d3ccd4511\") "
Mar 19 09:19:21.475569 master-0 kubenswrapper[7385]: I0319 09:19:21.474491 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-var-lock" (OuterVolumeSpecName: "var-lock") pod "f130dfec-5c6d-44d4-8a13-112d3ccd4511" (UID: "f130dfec-5c6d-44d4-8a13-112d3ccd4511"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:19:21.475569 master-0 kubenswrapper[7385]: I0319 09:19:21.475285 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f130dfec-5c6d-44d4-8a13-112d3ccd4511" (UID: "f130dfec-5c6d-44d4-8a13-112d3ccd4511"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:19:21.477037 master-0 kubenswrapper[7385]: I0319 09:19:21.476990 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f130dfec-5c6d-44d4-8a13-112d3ccd4511" (UID: "f130dfec-5c6d-44d4-8a13-112d3ccd4511"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:19:21.550889 master-0 kubenswrapper[7385]: I0319 09:19:21.550841 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log"
Mar 19 09:19:21.575703 master-0 kubenswrapper[7385]: I0319 09:19:21.575613 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:21.575703 master-0 kubenswrapper[7385]: I0319 09:19:21.575646 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:21.575703 master-0 kubenswrapper[7385]: I0319 09:19:21.575655 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f130dfec-5c6d-44d4-8a13-112d3ccd4511-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:21.750524 master-0 kubenswrapper[7385]: I0319 09:19:21.750409 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_9120887f-15a9-45e1-846d-dd85a5949ebb/installer/0.log"
Mar 19 09:19:21.950026 master-0 kubenswrapper[7385]: I0319 09:19:21.949975 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-8bz9x_53bff8e4-bf60-4386-8905-49d43fd6c420/kube-scheduler-operator-container/0.log"
Mar 19 09:19:22.149593 master-0 kubenswrapper[7385]: I0319 09:19:22.149520 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-8mpp9_a57648b5-1a08-49a7-bedb-f7c1e54d92b4/cluster-node-tuning-operator/0.log"
Mar 19 09:19:22.257134 master-0 kubenswrapper[7385]: I0319 09:19:22.257095 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_f130dfec-5c6d-44d4-8a13-112d3ccd4511/installer/0.log"
Mar 19 09:19:22.257770 master-0 kubenswrapper[7385]: I0319 09:19:22.257742 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 19 09:19:22.259686 master-0 kubenswrapper[7385]: I0319 09:19:22.259662 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f130dfec-5c6d-44d4-8a13-112d3ccd4511","Type":"ContainerDied","Data":"673bf35b216630536f4b4880dc5243bda1d02ee1820a4dfe7ae69ae40082b611"}
Mar 19 09:19:22.259788 master-0 kubenswrapper[7385]: I0319 09:19:22.259775 7385 scope.go:117] "RemoveContainer" containerID="d406fcbf9ee8fa471b4015bf63e32acea0c92f160254c0b2773d8ca3af979b03"
Mar 19 09:19:22.303592 master-0 kubenswrapper[7385]: I0319 09:19:22.303526 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 19 09:19:22.305306 master-0 kubenswrapper[7385]: I0319 09:19:22.305261 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 19 09:19:22.349568 master-0 kubenswrapper[7385]: I0319 09:19:22.349500 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-5r5sh_dde1a2d9-a43e-4b26-82d7-e0f83577468f/tuned/0.log"
Mar 19 09:19:22.536063 master-0 kubenswrapper[7385]: I0319 09:19:22.535945 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f130dfec-5c6d-44d4-8a13-112d3ccd4511" path="/var/lib/kubelet/pods/f130dfec-5c6d-44d4-8a13-112d3ccd4511/volumes"
Mar 19 09:19:22.551082 master-0 kubenswrapper[7385]: I0319 09:19:22.551027 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/kube-rbac-proxy/0.log"
Mar 19 09:19:22.752564 master-0 kubenswrapper[7385]: I0319 09:19:22.751190 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/0.log"
Mar 19 09:19:22.952452 master-0 kubenswrapper[7385]: I0319 09:19:22.952408 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/0.log"
Mar 19 09:19:23.142707 master-0 kubenswrapper[7385]: I0319 09:19:23.142416 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 19 09:19:23.142993 master-0 kubenswrapper[7385]: E0319 09:19:23.142818 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f130dfec-5c6d-44d4-8a13-112d3ccd4511" containerName="installer"
Mar 19 09:19:23.142993 master-0 kubenswrapper[7385]: I0319 09:19:23.142836 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="f130dfec-5c6d-44d4-8a13-112d3ccd4511" containerName="installer"
Mar 19 09:19:23.142993 master-0 kubenswrapper[7385]: I0319 09:19:23.142958 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="f130dfec-5c6d-44d4-8a13-112d3ccd4511" containerName="installer"
Mar 19 09:19:23.143329 master-0 kubenswrapper[7385]: I0319 09:19:23.143313 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.154333 master-0 kubenswrapper[7385]: I0319 09:19:23.154278 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 19 09:19:23.161667 master-0 kubenswrapper[7385]: I0319 09:19:23.156285 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/kube-rbac-proxy/0.log"
Mar 19 09:19:23.194184 master-0 kubenswrapper[7385]: I0319 09:19:23.194128 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-var-lock\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.195232 master-0 kubenswrapper[7385]: I0319 09:19:23.194221 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.195232 master-0 kubenswrapper[7385]: I0319 09:19:23.194279 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ba861f-a073-4d60-9136-041c2e98dd0f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.295434 master-0 kubenswrapper[7385]: I0319 09:19:23.295329 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-var-lock\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.295927 master-0 kubenswrapper[7385]: I0319 09:19:23.295461 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.295927 master-0 kubenswrapper[7385]: I0319 09:19:23.295509 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ba861f-a073-4d60-9136-041c2e98dd0f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.297521 master-0 kubenswrapper[7385]: I0319 09:19:23.297484 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-var-lock\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.297607 master-0 kubenswrapper[7385]: I0319 09:19:23.297556 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.320916 master-0 kubenswrapper[7385]: I0319 09:19:23.311504 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ba861f-a073-4d60-9136-041c2e98dd0f-kube-api-access\") pod \"installer-3-master-0\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.347360 master-0 kubenswrapper[7385]: I0319 09:19:23.347310 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-pqxp5_17e0cb4a-e776-4886-927e-ae446af7f234/copy-catalogd-manifests/0.log"
Mar 19 09:19:23.471074 master-0 kubenswrapper[7385]: I0319 09:19:23.470937 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:19:23.601646 master-0 kubenswrapper[7385]: I0319 09:19:23.601495 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-pqxp5_17e0cb4a-e776-4886-927e-ae446af7f234/copy-operator-controller-manifests/0.log"
Mar 19 09:19:23.750081 master-0 kubenswrapper[7385]: I0319 09:19:23.750042 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-67dcd4998-pqxp5_17e0cb4a-e776-4886-927e-ae446af7f234/cluster-olm-operator/0.log"
Mar 19 09:19:23.900939 master-0 kubenswrapper[7385]: I0319 09:19:23.900834 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 19 09:19:23.906623 master-0 kubenswrapper[7385]: W0319 09:19:23.906586 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod20ba861f_a073_4d60_9136_041c2e98dd0f.slice/crio-d9e18fc195cbf5fb27f76f640c42d213bffb004a73cf242e7c9e02beeff1062a WatchSource:0}: Error finding container d9e18fc195cbf5fb27f76f640c42d213bffb004a73cf242e7c9e02beeff1062a: Status 404 returned error can't find the container with id d9e18fc195cbf5fb27f76f640c42d213bffb004a73cf242e7c9e02beeff1062a
Mar 19 09:19:23.952709 master-0 kubenswrapper[7385]: I0319 09:19:23.952674 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-hrb9m_70258988-8374-4aee-aaa2-be3c2e853062/openshift-apiserver-operator/0.log"
Mar 19 09:19:24.147766 master-0 kubenswrapper[7385]: I0319 09:19:24.147731 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6f6b54748-s5cpx_1669b77c-4bef-42d5-ad0b-63c12a6677b2/fix-audit-permissions/0.log"
Mar 19 09:19:24.272663 master-0 kubenswrapper[7385]: I0319 09:19:24.272472 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"20ba861f-a073-4d60-9136-041c2e98dd0f","Type":"ContainerStarted","Data":"d9e18fc195cbf5fb27f76f640c42d213bffb004a73cf242e7c9e02beeff1062a"}
Mar 19 09:19:24.355242 master-0 kubenswrapper[7385]: I0319 09:19:24.355207 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6f6b54748-s5cpx_1669b77c-4bef-42d5-ad0b-63c12a6677b2/openshift-apiserver/0.log"
Mar 19 09:19:24.551817 master-0 kubenswrapper[7385]: I0319 09:19:24.551784 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-6f6b54748-s5cpx_1669b77c-4bef-42d5-ad0b-63c12a6677b2/openshift-apiserver-check-endpoints/0.log"
Mar 19 09:19:24.750116 master-0 kubenswrapper[7385]: I0319 09:19:24.750080 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-cfmgj_1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/etcd-operator/0.log"
Mar 19 09:19:24.950421 master-0 kubenswrapper[7385]: I0319 09:19:24.950374 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/0.log"
Mar 19 09:19:25.100124 master-0 kubenswrapper[7385]: I0319 09:19:25.100084 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_9120887f-15a9-45e1-846d-dd85a5949ebb/installer/0.log"
Mar 19 09:19:25.100636 master-0 kubenswrapper[7385]: I0319 09:19:25.100588 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 19 09:19:25.119087 master-0 kubenswrapper[7385]: I0319 09:19:25.119042 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9120887f-15a9-45e1-846d-dd85a5949ebb-kube-api-access\") pod \"9120887f-15a9-45e1-846d-dd85a5949ebb\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") "
Mar 19 09:19:25.119269 master-0 kubenswrapper[7385]: I0319 09:19:25.119109 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-var-lock\") pod \"9120887f-15a9-45e1-846d-dd85a5949ebb\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") "
Mar 19 09:19:25.119269 master-0 kubenswrapper[7385]: I0319 09:19:25.119168 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-kubelet-dir\") pod \"9120887f-15a9-45e1-846d-dd85a5949ebb\" (UID: \"9120887f-15a9-45e1-846d-dd85a5949ebb\") "
Mar 19 09:19:25.119588 master-0 kubenswrapper[7385]: I0319 09:19:25.119557 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9120887f-15a9-45e1-846d-dd85a5949ebb" (UID: "9120887f-15a9-45e1-846d-dd85a5949ebb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:19:25.119992 master-0 kubenswrapper[7385]: I0319 09:19:25.119969 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-var-lock" (OuterVolumeSpecName: "var-lock") pod "9120887f-15a9-45e1-846d-dd85a5949ebb" (UID: "9120887f-15a9-45e1-846d-dd85a5949ebb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:19:25.122311 master-0 kubenswrapper[7385]: I0319 09:19:25.122271 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9120887f-15a9-45e1-846d-dd85a5949ebb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9120887f-15a9-45e1-846d-dd85a5949ebb" (UID: "9120887f-15a9-45e1-846d-dd85a5949ebb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:19:25.222612 master-0 kubenswrapper[7385]: I0319 09:19:25.221359 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:25.222612 master-0 kubenswrapper[7385]: I0319 09:19:25.221406 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9120887f-15a9-45e1-846d-dd85a5949ebb-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:25.222612 master-0 kubenswrapper[7385]: I0319 09:19:25.221418 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9120887f-15a9-45e1-846d-dd85a5949ebb-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:19:25.282574 master-0 kubenswrapper[7385]: I0319 09:19:25.281870 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_9120887f-15a9-45e1-846d-dd85a5949ebb/installer/0.log"
Mar 19 09:19:25.282574 master-0 kubenswrapper[7385]: I0319 09:19:25.281915 7385 generic.go:334] "Generic (PLEG): container finished" podID="9120887f-15a9-45e1-846d-dd85a5949ebb" containerID="c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d" exitCode=1
Mar 19 09:19:25.282574 master-0 kubenswrapper[7385]: I0319 09:19:25.282127 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"9120887f-15a9-45e1-846d-dd85a5949ebb","Type":"ContainerDied","Data":"c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d"}
Mar 19 09:19:25.282574 master-0 kubenswrapper[7385]: I0319 09:19:25.282156 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"9120887f-15a9-45e1-846d-dd85a5949ebb","Type":"ContainerDied","Data":"fc86f8c86cd7588ac0d5a124324c5dfeabbc5d914701e9c7cc4367a57ec98e9a"}
Mar 19 09:19:25.282574 master-0 kubenswrapper[7385]: I0319 09:19:25.282198 7385 scope.go:117] "RemoveContainer" containerID="c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d"
Mar 19 09:19:25.282574 master-0 kubenswrapper[7385]: I0319 09:19:25.282315 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 19 09:19:25.288529 master-0 kubenswrapper[7385]: I0319 09:19:25.286943 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" event={"ID":"c1f4f7b3-7f79-4618-b87a-400cadcb9813","Type":"ContainerStarted","Data":"0efec46299b67a0eea4b13ca67058dc6945af55d88748d9fe42464dc879df463"} Mar 19 09:19:25.288529 master-0 kubenswrapper[7385]: I0319 09:19:25.287480 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" Mar 19 09:19:25.292422 master-0 kubenswrapper[7385]: I0319 09:19:25.290388 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"20ba861f-a073-4d60-9136-041c2e98dd0f","Type":"ContainerStarted","Data":"ab53721c199f233bd43c54da36cf0743a555ab62518f114872a0db72d2d2af5a"} Mar 19 09:19:25.293980 master-0 kubenswrapper[7385]: I0319 09:19:25.293893 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" Mar 19 09:19:25.297358 master-0 kubenswrapper[7385]: I0319 09:19:25.297318 7385 scope.go:117] "RemoveContainer" containerID="c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d" Mar 19 09:19:25.298292 master-0 kubenswrapper[7385]: E0319 09:19:25.298250 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d\": container with ID starting with c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d not found: ID does not exist" containerID="c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d" Mar 19 09:19:25.298398 master-0 kubenswrapper[7385]: I0319 09:19:25.298293 7385 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d"} err="failed to get container status \"c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d\": rpc error: code = NotFound desc = could not find container \"c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d\": container with ID starting with c6e87e2ccf7d8c8465bd4989425b055bfb840717d0eaf9ada6c65d0c7bd0657d not found: ID does not exist" Mar 19 09:19:25.306366 master-0 kubenswrapper[7385]: I0319 09:19:25.306298 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.306282461 podStartE2EDuration="2.306282461s" podCreationTimestamp="2026-03-19 09:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:19:25.305451297 +0000 UTC m=+60.979880998" watchObservedRunningTime="2026-03-19 09:19:25.306282461 +0000 UTC m=+60.980712182" Mar 19 09:19:25.319879 master-0 kubenswrapper[7385]: I0319 09:19:25.319717 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" podStartSLOduration=6.137777804 podStartE2EDuration="10.31969451s" podCreationTimestamp="2026-03-19 09:19:15 +0000 UTC" firstStartedPulling="2026-03-19 09:19:20.704654435 +0000 UTC m=+56.379084136" lastFinishedPulling="2026-03-19 09:19:24.886571141 +0000 UTC m=+60.561000842" observedRunningTime="2026-03-19 09:19:25.317270632 +0000 UTC m=+60.991700333" watchObservedRunningTime="2026-03-19 09:19:25.31969451 +0000 UTC m=+60.994124211" Mar 19 09:19:25.355506 master-0 kubenswrapper[7385]: I0319 09:19:25.355431 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 09:19:25.358052 master-0 kubenswrapper[7385]: I0319 09:19:25.358004 7385 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 19 09:19:25.359466 master-0 kubenswrapper[7385]: I0319 09:19:25.359426 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-f897ddc75-l2pbj_24e84d52-ae67-40d0-a2c5-39160b90fa0e/route-controller-manager/0.log" Mar 19 09:19:26.537519 master-0 kubenswrapper[7385]: I0319 09:19:26.537476 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9120887f-15a9-45e1-846d-dd85a5949ebb" path="/var/lib/kubelet/pods/9120887f-15a9-45e1-846d-dd85a5949ebb/volumes" Mar 19 09:19:27.686820 master-0 kubenswrapper[7385]: I0319 09:19:27.686672 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9xr8p" Mar 19 09:19:31.250917 master-0 kubenswrapper[7385]: I0319 09:19:31.250850 7385 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 09:19:31.251783 master-0 kubenswrapper[7385]: I0319 09:19:31.251108 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc" gracePeriod=30 Mar 19 09:19:31.251783 master-0 kubenswrapper[7385]: I0319 09:19:31.251178 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1" gracePeriod=30 Mar 19 09:19:31.280022 master-0 kubenswrapper[7385]: I0319 09:19:31.279935 7385 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 19 09:19:31.280418 master-0 kubenswrapper[7385]: E0319 09:19:31.280374 7385 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 19 09:19:31.280485 master-0 kubenswrapper[7385]: I0319 09:19:31.280416 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 19 09:19:31.280485 master-0 kubenswrapper[7385]: E0319 09:19:31.280458 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9120887f-15a9-45e1-846d-dd85a5949ebb" containerName="installer" Mar 19 09:19:31.280485 master-0 kubenswrapper[7385]: I0319 09:19:31.280478 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="9120887f-15a9-45e1-846d-dd85a5949ebb" containerName="installer" Mar 19 09:19:31.280646 master-0 kubenswrapper[7385]: E0319 09:19:31.280505 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 19 09:19:31.280646 master-0 kubenswrapper[7385]: I0319 09:19:31.280525 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 19 09:19:31.280820 master-0 kubenswrapper[7385]: I0319 09:19:31.280778 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="9120887f-15a9-45e1-846d-dd85a5949ebb" containerName="installer" Mar 19 09:19:31.280875 master-0 kubenswrapper[7385]: I0319 09:19:31.280819 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 19 09:19:31.280875 master-0 kubenswrapper[7385]: I0319 09:19:31.280851 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 19 09:19:31.284262 master-0 kubenswrapper[7385]: I0319 09:19:31.284215 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.394702 master-0 kubenswrapper[7385]: I0319 09:19:31.394614 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.394961 master-0 kubenswrapper[7385]: I0319 09:19:31.394718 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.394961 master-0 kubenswrapper[7385]: I0319 09:19:31.394835 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.394961 master-0 kubenswrapper[7385]: I0319 09:19:31.394936 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.395153 master-0 kubenswrapper[7385]: I0319 09:19:31.395020 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.395300 master-0 
kubenswrapper[7385]: I0319 09:19:31.395228 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496090 master-0 kubenswrapper[7385]: I0319 09:19:31.496008 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496306 master-0 kubenswrapper[7385]: I0319 09:19:31.496222 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496358 master-0 kubenswrapper[7385]: I0319 09:19:31.496331 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496437 master-0 kubenswrapper[7385]: I0319 09:19:31.496404 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496494 master-0 kubenswrapper[7385]: I0319 09:19:31.496455 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496571 master-0 kubenswrapper[7385]: I0319 09:19:31.496507 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496571 master-0 kubenswrapper[7385]: I0319 09:19:31.496518 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496698 master-0 kubenswrapper[7385]: I0319 09:19:31.496624 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496749 master-0 kubenswrapper[7385]: I0319 09:19:31.496688 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496793 master-0 kubenswrapper[7385]: I0319 09:19:31.496743 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496837 master-0 kubenswrapper[7385]: I0319 
09:19:31.496792 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:31.496979 master-0 kubenswrapper[7385]: I0319 09:19:31.496942 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:19:32.006245 master-0 kubenswrapper[7385]: I0319 09:19:32.006177 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:19:32.006245 master-0 kubenswrapper[7385]: I0319 09:19:32.006242 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:19:32.006699 master-0 kubenswrapper[7385]: I0319 09:19:32.006652 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:19:32.006823 
master-0 kubenswrapper[7385]: I0319 09:19:32.006804 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:19:32.006998 master-0 kubenswrapper[7385]: I0319 09:19:32.006974 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:19:32.007689 master-0 kubenswrapper[7385]: I0319 09:19:32.007649 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:19:32.007752 master-0 kubenswrapper[7385]: I0319 09:19:32.007724 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:19:32.007840 master-0 kubenswrapper[7385]: I0319 09:19:32.007804 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:19:32.008016 master-0 kubenswrapper[7385]: I0319 09:19:32.007973 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:19:32.008581 master-0 kubenswrapper[7385]: I0319 09:19:32.008432 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:19:32.008581 master-0 kubenswrapper[7385]: I0319 09:19:32.008478 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:19:32.008581 master-0 kubenswrapper[7385]: I0319 09:19:32.008570 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:19:32.010517 master-0 
kubenswrapper[7385]: I0319 09:19:32.009922 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:19:32.010517 master-0 kubenswrapper[7385]: I0319 09:19:32.010138 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:19:32.011250 master-0 kubenswrapper[7385]: I0319 09:19:32.011077 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:19:32.011782 master-0 kubenswrapper[7385]: I0319 09:19:32.011464 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:19:32.011782 master-0 kubenswrapper[7385]: I0319 09:19:32.011473 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod 
\"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:19:32.012071 master-0 kubenswrapper[7385]: I0319 09:19:32.011991 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-q4rkm\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:19:32.012297 master-0 kubenswrapper[7385]: I0319 09:19:32.012250 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:19:32.014677 master-0 kubenswrapper[7385]: I0319 09:19:32.013117 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:19:32.014677 master-0 kubenswrapper[7385]: I0319 09:19:32.013278 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:19:32.014677 master-0 kubenswrapper[7385]: I0319 09:19:32.013438 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:19:32.014677 master-0 kubenswrapper[7385]: I0319 09:19:32.013629 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:19:32.015630 master-0 kubenswrapper[7385]: I0319 09:19:32.015565 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:19:32.066603 master-0 kubenswrapper[7385]: I0319 09:19:32.066520 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:19:32.066603 master-0 kubenswrapper[7385]: I0319 09:19:32.066555 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:19:32.066937 master-0 kubenswrapper[7385]: I0319 09:19:32.066880 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:19:32.067997 master-0 kubenswrapper[7385]: I0319 09:19:32.067956 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:19:32.072767 master-0 kubenswrapper[7385]: I0319 09:19:32.072721 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:19:32.072922 master-0 kubenswrapper[7385]: I0319 09:19:32.072843 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:19:32.073019 master-0 kubenswrapper[7385]: I0319 09:19:32.072980 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:19:32.073092 master-0 kubenswrapper[7385]: I0319 09:19:32.073066 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:19:32.073287 master-0 kubenswrapper[7385]: I0319 09:19:32.073249 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:19:32.075025 master-0 kubenswrapper[7385]: I0319 09:19:32.074988 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:19:32.075581 master-0 kubenswrapper[7385]: I0319 09:19:32.075531 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:19:44.315466 master-0 kubenswrapper[7385]: E0319 09:19:44.315388 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 19 09:19:44.316267 master-0 kubenswrapper[7385]: I0319 09:19:44.315929 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 09:19:44.339190 master-0 kubenswrapper[7385]: W0319 09:19:44.339094 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-ab26ca12d4a9a789cb405f83bb35f6e30e61a861bce5e0fb3e02e8d5c61ee41c WatchSource:0}: Error finding container ab26ca12d4a9a789cb405f83bb35f6e30e61a861bce5e0fb3e02e8d5c61ee41c: Status 404 returned error can't find the container with id ab26ca12d4a9a789cb405f83bb35f6e30e61a861bce5e0fb3e02e8d5c61ee41c Mar 19 09:19:44.380962 master-0 kubenswrapper[7385]: I0319 09:19:44.380908 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"ab26ca12d4a9a789cb405f83bb35f6e30e61a861bce5e0fb3e02e8d5c61ee41c"} Mar 19 09:19:45.386702 master-0 kubenswrapper[7385]: I0319 09:19:45.386610 7385 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="6c34410d1b933db4369369ed45b2834d81e2f45432196f8498337d329dbd86c7" exitCode=0 Mar 19 09:19:45.386702 master-0 kubenswrapper[7385]: I0319 09:19:45.386675 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"6c34410d1b933db4369369ed45b2834d81e2f45432196f8498337d329dbd86c7"} Mar 19 
09:19:45.390512 master-0 kubenswrapper[7385]: I0319 09:19:45.389428 7385 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="e936e2d314dab9154842440cf41e00874f26fcc073cf860d24367374f28b489d" exitCode=1 Mar 19 09:19:45.390512 master-0 kubenswrapper[7385]: I0319 09:19:45.389492 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"e936e2d314dab9154842440cf41e00874f26fcc073cf860d24367374f28b489d"} Mar 19 09:19:45.390512 master-0 kubenswrapper[7385]: I0319 09:19:45.390102 7385 scope.go:117] "RemoveContainer" containerID="e936e2d314dab9154842440cf41e00874f26fcc073cf860d24367374f28b489d" Mar 19 09:19:45.421482 master-0 kubenswrapper[7385]: E0319 09:19:45.421450 7385 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podb9969717_8350_416e_8711_877cdf557d81.slice/crio-e03d771886973476dcc44da1c43c397db09c499968945f5153359a0c06bc98ab.scope\": RecentStats: unable to find data in memory cache]" Mar 19 09:19:46.701216 master-0 kubenswrapper[7385]: I0319 09:19:46.701155 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:19:46.702123 master-0 kubenswrapper[7385]: E0319 09:19:46.701224 7385 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)" Mar 19 09:19:46.713057 master-0 kubenswrapper[7385]: I0319 09:19:46.712698 7385 generic.go:334] "Generic (PLEG): container finished" podID="b9969717-8350-416e-8711-877cdf557d81" containerID="e03d771886973476dcc44da1c43c397db09c499968945f5153359a0c06bc98ab" exitCode=0 Mar 19 09:19:46.717424 master-0 kubenswrapper[7385]: I0319 
09:19:46.713232 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"b9969717-8350-416e-8711-877cdf557d81","Type":"ContainerDied","Data":"e03d771886973476dcc44da1c43c397db09c499968945f5153359a0c06bc98ab"} Mar 19 09:19:46.720190 master-0 kubenswrapper[7385]: I0319 09:19:46.720149 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"58961d9e0be486e46715cb6bf5872c6474bbf247fb8ed12ba8931d59b7f9e590"} Mar 19 09:19:47.726326 master-0 kubenswrapper[7385]: I0319 09:19:47.726263 7385 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="0fa64701f5e06185b54d04000e8eff35b5351d75655dd3a6eb6ffaa3f06a93bd" exitCode=1 Mar 19 09:19:47.726884 master-0 kubenswrapper[7385]: I0319 09:19:47.726351 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"0fa64701f5e06185b54d04000e8eff35b5351d75655dd3a6eb6ffaa3f06a93bd"} Mar 19 09:19:47.727204 master-0 kubenswrapper[7385]: I0319 09:19:47.727125 7385 scope.go:117] "RemoveContainer" containerID="0fa64701f5e06185b54d04000e8eff35b5351d75655dd3a6eb6ffaa3f06a93bd" Mar 19 09:19:48.005769 master-0 kubenswrapper[7385]: I0319 09:19:48.005722 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 09:19:48.124752 master-0 kubenswrapper[7385]: I0319 09:19:48.124690 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-kubelet-dir\") pod \"b9969717-8350-416e-8711-877cdf557d81\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " Mar 19 09:19:48.125039 master-0 kubenswrapper[7385]: I0319 09:19:48.124829 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9969717-8350-416e-8711-877cdf557d81-kube-api-access\") pod \"b9969717-8350-416e-8711-877cdf557d81\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " Mar 19 09:19:48.125039 master-0 kubenswrapper[7385]: I0319 09:19:48.124857 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-var-lock\") pod \"b9969717-8350-416e-8711-877cdf557d81\" (UID: \"b9969717-8350-416e-8711-877cdf557d81\") " Mar 19 09:19:48.125168 master-0 kubenswrapper[7385]: I0319 09:19:48.125104 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-var-lock" (OuterVolumeSpecName: "var-lock") pod "b9969717-8350-416e-8711-877cdf557d81" (UID: "b9969717-8350-416e-8711-877cdf557d81"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:19:48.125168 master-0 kubenswrapper[7385]: I0319 09:19:48.125131 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b9969717-8350-416e-8711-877cdf557d81" (UID: "b9969717-8350-416e-8711-877cdf557d81"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:19:48.128192 master-0 kubenswrapper[7385]: I0319 09:19:48.128132 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9969717-8350-416e-8711-877cdf557d81-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b9969717-8350-416e-8711-877cdf557d81" (UID: "b9969717-8350-416e-8711-877cdf557d81"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:19:48.226709 master-0 kubenswrapper[7385]: I0319 09:19:48.226641 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9969717-8350-416e-8711-877cdf557d81-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:48.226709 master-0 kubenswrapper[7385]: I0319 09:19:48.226683 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:48.226709 master-0 kubenswrapper[7385]: I0319 09:19:48.226693 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9969717-8350-416e-8711-877cdf557d81-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:19:48.732751 master-0 kubenswrapper[7385]: I0319 09:19:48.732683 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"b9969717-8350-416e-8711-877cdf557d81","Type":"ContainerDied","Data":"98b4484c29bf71462f8aa83a2438a018a65a72efc3ab1ad01ecc3b27224d1c48"} Mar 19 09:19:48.732751 master-0 kubenswrapper[7385]: I0319 09:19:48.732746 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b4484c29bf71462f8aa83a2438a018a65a72efc3ab1ad01ecc3b27224d1c48" Mar 19 09:19:48.733700 master-0 kubenswrapper[7385]: I0319 09:19:48.733023 7385 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 09:19:48.734583 master-0 kubenswrapper[7385]: I0319 09:19:48.734510 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"64562b405a3862b5592fdd93f8c95623b24024a5e23281d2b69f8ff3942c63c6"} Mar 19 09:19:51.749437 master-0 kubenswrapper[7385]: I0319 09:19:51.749359 7385 generic.go:334] "Generic (PLEG): container finished" podID="70e8c62b-97c3-4c0c-85d3-f660118831fd" containerID="13eaf9fb6b5973dc7a39cf4a595a1daae2d0c0b608e70d2c41f378466d42eb35" exitCode=0 Mar 19 09:19:51.749437 master-0 kubenswrapper[7385]: I0319 09:19:51.749402 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerDied","Data":"13eaf9fb6b5973dc7a39cf4a595a1daae2d0c0b608e70d2c41f378466d42eb35"} Mar 19 09:19:51.750287 master-0 kubenswrapper[7385]: I0319 09:19:51.749821 7385 scope.go:117] "RemoveContainer" containerID="13eaf9fb6b5973dc7a39cf4a595a1daae2d0c0b608e70d2c41f378466d42eb35" Mar 19 09:19:51.856631 master-0 kubenswrapper[7385]: I0319 09:19:51.856585 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:19:52.502578 master-0 kubenswrapper[7385]: I0319 09:19:52.502448 7385 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-cfmgj container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 19 09:19:52.502578 master-0 kubenswrapper[7385]: I0319 09:19:52.502503 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" 
podUID="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 19 09:19:52.788976 master-0 kubenswrapper[7385]: I0319 09:19:52.788828 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerStarted","Data":"07f85a8394cfe2927824d6dd40beca1cf31136db472d1b09c7b6f5f1e6dae94f"} Mar 19 09:19:53.665951 master-0 kubenswrapper[7385]: I0319 09:19:53.665886 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:19:54.857506 master-0 kubenswrapper[7385]: I0319 09:19:54.857414 7385 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 09:19:55.827208 master-0 kubenswrapper[7385]: I0319 09:19:55.827131 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/0.log" Mar 19 09:19:55.827491 master-0 kubenswrapper[7385]: I0319 09:19:55.827224 7385 generic.go:334] "Generic (PLEG): container finished" podID="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" containerID="edc97cab8d1c4b85265dcfce231bf29161c0caac67a28ad74d915ec1fff0a681" exitCode=1 Mar 19 09:19:55.827491 master-0 kubenswrapper[7385]: I0319 09:19:55.827272 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" event={"ID":"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff","Type":"ContainerDied","Data":"edc97cab8d1c4b85265dcfce231bf29161c0caac67a28ad74d915ec1fff0a681"} Mar 19 09:19:55.827939 master-0 kubenswrapper[7385]: I0319 09:19:55.827878 7385 scope.go:117] "RemoveContainer" containerID="edc97cab8d1c4b85265dcfce231bf29161c0caac67a28ad74d915ec1fff0a681" Mar 19 09:19:56.051280 master-0 kubenswrapper[7385]: E0319 09:19:56.050965 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:19:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:19:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:19:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:19:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509
bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.
io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7
eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27\\\"],\\\"sizeBytes\\\":504662731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:19:56.702181 master-0 kubenswrapper[7385]: E0319 09:19:56.702062 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:19:56.836761 master-0 kubenswrapper[7385]: I0319 09:19:56.836666 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/0.log" Mar 19 09:19:56.836761 master-0 kubenswrapper[7385]: I0319 09:19:56.836754 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" 
event={"ID":"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff","Type":"ContainerStarted","Data":"620239dc4a60804d8418bde885755ec6483c00980113b997aa1fddf56697d09e"} Mar 19 09:19:58.392708 master-0 kubenswrapper[7385]: E0319 09:19:58.392630 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 19 09:19:58.849641 master-0 kubenswrapper[7385]: I0319 09:19:58.849530 7385 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="9d6f4e81b24bbc088f03886bb58933c7482c216dc5c189aa0267f9e14838f10a" exitCode=0 Mar 19 09:19:58.849641 master-0 kubenswrapper[7385]: I0319 09:19:58.849657 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"9d6f4e81b24bbc088f03886bb58933c7482c216dc5c189aa0267f9e14838f10a"} Mar 19 09:19:58.851624 master-0 kubenswrapper[7385]: I0319 09:19:58.851470 7385 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1" exitCode=0 Mar 19 09:20:01.367712 master-0 kubenswrapper[7385]: I0319 09:20:01.367665 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 19 09:20:01.368485 master-0 kubenswrapper[7385]: I0319 09:20:01.367774 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:20:01.495749 master-0 kubenswrapper[7385]: I0319 09:20:01.495667 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 19 09:20:01.496024 master-0 kubenswrapper[7385]: I0319 09:20:01.495814 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 19 09:20:01.496024 master-0 kubenswrapper[7385]: I0319 09:20:01.495806 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:20:01.496194 master-0 kubenswrapper[7385]: I0319 09:20:01.496021 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:20:01.496331 master-0 kubenswrapper[7385]: I0319 09:20:01.496282 7385 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:01.496434 master-0 kubenswrapper[7385]: I0319 09:20:01.496332 7385 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:01.868305 master-0 kubenswrapper[7385]: I0319 09:20:01.868231 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 19 09:20:01.868619 master-0 kubenswrapper[7385]: I0319 09:20:01.868310 7385 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc" exitCode=137 Mar 19 09:20:01.868619 master-0 kubenswrapper[7385]: I0319 09:20:01.868378 7385 scope.go:117] "RemoveContainer" containerID="cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1" Mar 19 09:20:01.868619 master-0 kubenswrapper[7385]: I0319 09:20:01.868396 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:20:01.897836 master-0 kubenswrapper[7385]: I0319 09:20:01.897769 7385 scope.go:117] "RemoveContainer" containerID="5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc" Mar 19 09:20:01.913367 master-0 kubenswrapper[7385]: I0319 09:20:01.913306 7385 scope.go:117] "RemoveContainer" containerID="cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1" Mar 19 09:20:01.914102 master-0 kubenswrapper[7385]: E0319 09:20:01.914030 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1\": container with ID starting with cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1 not found: ID does not exist" containerID="cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1" Mar 19 09:20:01.914225 master-0 kubenswrapper[7385]: I0319 09:20:01.914115 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1"} err="failed to get container status \"cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1\": rpc error: code = NotFound desc = could not find container \"cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1\": container with ID starting with cfe97e56b6302f49cdcf2f84debe6109f6a9b8777ac590f301bb5f710cbf1bd1 not found: ID does not exist" Mar 19 09:20:01.914225 master-0 kubenswrapper[7385]: I0319 09:20:01.914158 7385 scope.go:117] "RemoveContainer" containerID="5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc" Mar 19 09:20:01.914874 master-0 kubenswrapper[7385]: E0319 09:20:01.914815 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc\": 
container with ID starting with 5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc not found: ID does not exist" containerID="5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc" Mar 19 09:20:01.914979 master-0 kubenswrapper[7385]: I0319 09:20:01.914873 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc"} err="failed to get container status \"5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc\": rpc error: code = NotFound desc = could not find container \"5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc\": container with ID starting with 5fbb15cb83a9786e5416821d225bff9ef8a3aff3d3aac461cdfc21915ac457cc not found: ID does not exist" Mar 19 09:20:02.536981 master-0 kubenswrapper[7385]: I0319 09:20:02.536918 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 19 09:20:02.537918 master-0 kubenswrapper[7385]: I0319 09:20:02.537300 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 19 09:20:03.886188 master-0 kubenswrapper[7385]: I0319 09:20:03.886101 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-gkvf5_bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/network-operator/0.log" Mar 19 09:20:03.886188 master-0 kubenswrapper[7385]: I0319 09:20:03.886160 7385 generic.go:334] "Generic (PLEG): container finished" podID="bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c" containerID="794857861e41452767f8150da770c0fdb6415a1b4c58da2ca5c6bb1b5694eb77" exitCode=255 Mar 19 09:20:03.888849 master-0 kubenswrapper[7385]: I0319 09:20:03.888801 7385 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_e3ab0802-da8a-475c-a707-09f7838f580b/installer/0.log" Mar 19 09:20:03.888973 master-0 kubenswrapper[7385]: I0319 09:20:03.888859 7385 generic.go:334] "Generic (PLEG): container finished" podID="e3ab0802-da8a-475c-a707-09f7838f580b" containerID="a1c35003004ca85e3194260594ce7980c9cfead4c46c7a6e5e65ede51128fa87" exitCode=1 Mar 19 09:20:04.857082 master-0 kubenswrapper[7385]: I0319 09:20:04.856989 7385 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:05.260713 master-0 kubenswrapper[7385]: E0319 09:20:05.260500 7385 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e338a0d3cc6f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:19:31.251144437 +0000 UTC m=+66.925574158,LastTimestamp:2026-03-19 09:19:31.251144437 +0000 UTC m=+66.925574158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:20:05.900215 master-0 kubenswrapper[7385]: I0319 09:20:05.900167 7385 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ec98e408-a574-40eb-b84d-111edbaab81a/installer/0.log" Mar 19 09:20:05.900519 master-0 kubenswrapper[7385]: I0319 09:20:05.900483 7385 generic.go:334] "Generic (PLEG): container finished" podID="ec98e408-a574-40eb-b84d-111edbaab81a" containerID="bad1a4ade656dc88a2ff2cedf66c5fd93d2a5c35714abd9bee1ca36e672bdec3" exitCode=1 Mar 19 09:20:06.051886 master-0 kubenswrapper[7385]: E0319 09:20:06.051808 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:06.705650 master-0 kubenswrapper[7385]: E0319 09:20:06.703059 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:09.923585 master-0 kubenswrapper[7385]: I0319 09:20:09.923512 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_20ba861f-a073-4d60-9136-041c2e98dd0f/installer/0.log" Mar 19 09:20:09.924229 master-0 kubenswrapper[7385]: I0319 09:20:09.923606 7385 generic.go:334] "Generic (PLEG): container finished" podID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerID="ab53721c199f233bd43c54da36cf0743a555ab62518f114872a0db72d2d2af5a" exitCode=1 Mar 19 09:20:11.855969 master-0 kubenswrapper[7385]: E0319 09:20:11.855931 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 19 09:20:12.997615 master-0 kubenswrapper[7385]: I0319 09:20:12.997559 7385 generic.go:334] "Generic (PLEG): container finished" 
podID="24b4ed170d527099878cb5fdd508a2fb" containerID="f969fe9873a3954169d30a02594ff223c659b89547ce589e4efba58ec438e923" exitCode=0 Mar 19 09:20:14.856670 master-0 kubenswrapper[7385]: I0319 09:20:14.856509 7385 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:16.052632 master-0 kubenswrapper[7385]: E0319 09:20:16.052165 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:16.704493 master-0 kubenswrapper[7385]: E0319 09:20:16.704335 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:19.046527 master-0 kubenswrapper[7385]: I0319 09:20:19.046398 7385 generic.go:334] "Generic (PLEG): container finished" podID="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" containerID="e9208fca3070b80809292873e901e7513b6e0cbe29792fde8a62dcde9ce791be" exitCode=0 Mar 19 09:20:25.073413 master-0 kubenswrapper[7385]: I0319 09:20:25.073368 7385 generic.go:334] "Generic (PLEG): container finished" podID="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" containerID="13d37b6e0fd525b422b8c24e6c520e3e647d99050d3e3d8fce7cd4856511e27f" exitCode=0 Mar 19 09:20:25.075805 master-0 kubenswrapper[7385]: I0319 09:20:25.075787 7385 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-t7zwh_47da8964-3606-4181-87fb-8f04a3065295/approver/0.log" Mar 19 09:20:25.076471 master-0 kubenswrapper[7385]: I0319 09:20:25.076431 7385 generic.go:334] "Generic (PLEG): container finished" podID="47da8964-3606-4181-87fb-8f04a3065295" containerID="9b3fc8a626e0487acce62c5d3181f8201f7287976a42754235b1309dbd2babb2" exitCode=1 Mar 19 09:20:26.002074 master-0 kubenswrapper[7385]: E0319 09:20:26.002021 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 19 09:20:26.052941 master-0 kubenswrapper[7385]: E0319 09:20:26.052879 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:26.708572 master-0 kubenswrapper[7385]: E0319 09:20:26.705651 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:26.708572 master-0 kubenswrapper[7385]: I0319 09:20:26.705703 7385 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 19 09:20:30.109177 master-0 kubenswrapper[7385]: I0319 09:20:30.109064 7385 generic.go:334] "Generic (PLEG): container finished" podID="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" containerID="3335c7fc18f5f7e2694a86064d55e2221326f9866ff420531a852d42c29d0c0d" exitCode=0 Mar 19 09:20:31.114531 master-0 kubenswrapper[7385]: I0319 09:20:31.114377 7385 generic.go:334] "Generic (PLEG): container finished" 
podID="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" containerID="7f84fbd703825db689c03d2baee5e05e0406b0c7857947e23dfe9649aed6fbc3" exitCode=0 Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: E0319 09:20:32.737258 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-config-operator-84d549f6d5-4wv72_openshift-machine-config-operator_c222998f-6211-4466-8ad7-5d9fcfb10789_0(a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a): error adding pod openshift-machine-config-operator_machine-config-operator-84d549f6d5-4wv72 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a" Netns:"/var/run/netns/17ad450f-cfbd-46e4-ba8f-17d0842aac48" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84d549f6d5-4wv72;K8S_POD_INFRA_CONTAINER_ID=a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a;K8S_POD_UID=c222998f-6211-4466-8ad7-5d9fcfb10789" Path:"" ERRORED: error configuring pod [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72/c222998f-6211-4466-8ad7-5d9fcfb10789]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84d549f6d5-4wv72?timeout=1m0s": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers) Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: > Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: E0319 09:20:32.737375 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-config-operator-84d549f6d5-4wv72_openshift-machine-config-operator_c222998f-6211-4466-8ad7-5d9fcfb10789_0(a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a): error adding pod openshift-machine-config-operator_machine-config-operator-84d549f6d5-4wv72 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a" Netns:"/var/run/netns/17ad450f-cfbd-46e4-ba8f-17d0842aac48" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84d549f6d5-4wv72;K8S_POD_INFRA_CONTAINER_ID=a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a;K8S_POD_UID=c222998f-6211-4466-8ad7-5d9fcfb10789" Path:"" ERRORED: error configuring pod [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72/c222998f-6211-4466-8ad7-5d9fcfb10789]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update 
the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84d549f6d5-4wv72?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: > pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: E0319 09:20:32.737409 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-config-operator-84d549f6d5-4wv72_openshift-machine-config-operator_c222998f-6211-4466-8ad7-5d9fcfb10789_0(a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a): error adding pod openshift-machine-config-operator_machine-config-operator-84d549f6d5-4wv72 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a" Netns:"/var/run/netns/17ad450f-cfbd-46e4-ba8f-17d0842aac48" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84d549f6d5-4wv72;K8S_POD_INFRA_CONTAINER_ID=a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a;K8S_POD_UID=c222998f-6211-4466-8ad7-5d9fcfb10789" Path:"" ERRORED: error configuring pod [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72/c222998f-6211-4466-8ad7-5d9fcfb10789]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84d549f6d5-4wv72?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: > pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:20:32.741762 master-0 kubenswrapper[7385]: E0319 09:20:32.737527 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"machine-config-operator-84d549f6d5-4wv72_openshift-machine-config-operator(c222998f-6211-4466-8ad7-5d9fcfb10789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"machine-config-operator-84d549f6d5-4wv72_openshift-machine-config-operator(c222998f-6211-4466-8ad7-5d9fcfb10789)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-config-operator-84d549f6d5-4wv72_openshift-machine-config-operator_c222998f-6211-4466-8ad7-5d9fcfb10789_0(a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a): error adding pod openshift-machine-config-operator_machine-config-operator-84d549f6d5-4wv72 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a\\\" Netns:\\\"/var/run/netns/17ad450f-cfbd-46e4-ba8f-17d0842aac48\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-config-operator;K8S_POD_NAME=machine-config-operator-84d549f6d5-4wv72;K8S_POD_INFRA_CONTAINER_ID=a85356cf8505359091fb88125f0536a85b490d00e110662ac20ccaded8b6e19a;K8S_POD_UID=c222998f-6211-4466-8ad7-5d9fcfb10789\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72] networking: Multus: [openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72/c222998f-6211-4466-8ad7-5d9fcfb10789]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: SetNetworkStatus: failed to update the pod machine-config-operator-84d549f6d5-4wv72 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84d549f6d5-4wv72?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" podUID="c222998f-6211-4466-8ad7-5d9fcfb10789" Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: E0319 09:20:32.847328 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-52j2b_openshift-operator-lifecycle-manager_e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc_0(738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-52j2b to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a" Netns:"/var/run/netns/a50445df-4ba3-4907-8d20-b4f8f5c3809b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-52j2b;K8S_POD_INFRA_CONTAINER_ID=738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a;K8S_POD_UID=e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-52j2b?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: > Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: E0319 09:20:32.847392 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-52j2b_openshift-operator-lifecycle-manager_e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc_0(738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-52j2b to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a" Netns:"/var/run/netns/a50445df-4ba3-4907-8d20-b4f8f5c3809b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-52j2b;K8S_POD_INFRA_CONTAINER_ID=738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a;K8S_POD_UID=e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-52j2b?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: > pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: E0319 09:20:32.847411 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_package-server-manager-7b95f86987-52j2b_openshift-operator-lifecycle-manager_e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc_0(738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-52j2b to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a" Netns:"/var/run/netns/a50445df-4ba3-4907-8d20-b4f8f5c3809b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-52j2b;K8S_POD_INFRA_CONTAINER_ID=738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a;K8S_POD_UID=e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-52j2b?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: > pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:20:32.847519 master-0 kubenswrapper[7385]: E0319 09:20:32.847465 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-7b95f86987-52j2b_openshift-operator-lifecycle-manager(e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"package-server-manager-7b95f86987-52j2b_openshift-operator-lifecycle-manager(e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-52j2b_openshift-operator-lifecycle-manager_e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc_0(738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-52j2b to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a\\\" Netns:\\\"/var/run/netns/a50445df-4ba3-4907-8d20-b4f8f5c3809b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-52j2b;K8S_POD_INFRA_CONTAINER_ID=738ad1a21f72e85fc6a0146944082e0214156151192841ee249154ea36accd0a;K8S_POD_UID=e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-52j2b in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-52j2b?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" podUID="e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" Mar 19 09:20:33.075811 master-0 kubenswrapper[7385]: E0319 09:20:33.075757 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.075811 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a_0(922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721): error adding pod openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721" Netns:"/var/run/netns/c75e38e4-a126-4074-9ad0-8e4f9e6832bb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-66b84d69b-vfnhd;K8S_POD_INFRA_CONTAINER_ID=922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721;K8S_POD_UID=8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd] networking: Multus: [openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods ingress-operator-66b84d69b-vfnhd) Mar 19 09:20:33.075811 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.075811 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: E0319 09:20:33.075825 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a_0(922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721): error adding pod openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721" Netns:"/var/run/netns/c75e38e4-a126-4074-9ad0-8e4f9e6832bb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-66b84d69b-vfnhd;K8S_POD_INFRA_CONTAINER_ID=922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721;K8S_POD_UID=8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd] networking: Multus: [openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods ingress-operator-66b84d69b-vfnhd) Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: > 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: E0319 09:20:33.075847 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a_0(922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721): error adding pod openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721" Netns:"/var/run/netns/c75e38e4-a126-4074-9ad0-8e4f9e6832bb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-66b84d69b-vfnhd;K8S_POD_INFRA_CONTAINER_ID=922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721;K8S_POD_UID=8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Path:"" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd] networking: Multus: [openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods ingress-operator-66b84d69b-vfnhd) Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: > pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:20:33.076020 master-0 kubenswrapper[7385]: E0319 09:20:33.075897 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a_0(922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721): error adding pod openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721\\\" Netns:\\\"/var/run/netns/c75e38e4-a126-4074-9ad0-8e4f9e6832bb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-ingress-operator;K8S_POD_NAME=ingress-operator-66b84d69b-vfnhd;K8S_POD_INFRA_CONTAINER_ID=922404305da54b1088482a143d9d47c4a7dd8027a7d9e867fe2e9d2f6bb94721;K8S_POD_UID=8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd] networking: Multus: 
[openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: SetNetworkStatus: failed to update the pod ingress-operator-66b84d69b-vfnhd in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods ingress-operator-66b84d69b-vfnhd)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:20:33.087478 master-0 kubenswrapper[7385]: E0319 09:20:33.087409 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.087478 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-6f69995874-sw7cc_openshift-machine-api_3a07456d-2e8e-4e80-a777-d0903ad21f07_0(73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1): error adding pod openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1" Netns:"/var/run/netns/c40c9704-852e-4bc9-b0d3-82b705189f31" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-6f69995874-sw7cc;K8S_POD_INFRA_CONTAINER_ID=73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1;K8S_POD_UID=3a07456d-2e8e-4e80-a777-d0903ad21f07" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc/3a07456d-2e8e-4e80-a777-d0903ad21f07]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-6f69995874-sw7cc) Mar 19 09:20:33.087478 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.087478 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: E0319 09:20:33.087492 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-6f69995874-sw7cc_openshift-machine-api_3a07456d-2e8e-4e80-a777-d0903ad21f07_0(73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1): error adding pod openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc to CNI network 
"multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1" Netns:"/var/run/netns/c40c9704-852e-4bc9-b0d3-82b705189f31" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-6f69995874-sw7cc;K8S_POD_INFRA_CONTAINER_ID=73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1;K8S_POD_UID=3a07456d-2e8e-4e80-a777-d0903ad21f07" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc/3a07456d-2e8e-4e80-a777-d0903ad21f07]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-6f69995874-sw7cc) Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: > pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: E0319 09:20:33.087515 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.087705 master-0 
kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-6f69995874-sw7cc_openshift-machine-api_3a07456d-2e8e-4e80-a777-d0903ad21f07_0(73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1): error adding pod openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1" Netns:"/var/run/netns/c40c9704-852e-4bc9-b0d3-82b705189f31" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-6f69995874-sw7cc;K8S_POD_INFRA_CONTAINER_ID=73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1;K8S_POD_UID=3a07456d-2e8e-4e80-a777-d0903ad21f07" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc/3a07456d-2e8e-4e80-a777-d0903ad21f07]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-6f69995874-sw7cc) Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: > pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:20:33.087705 master-0 kubenswrapper[7385]: E0319 09:20:33.087596 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-baremetal-operator-6f69995874-sw7cc_openshift-machine-api(3a07456d-2e8e-4e80-a777-d0903ad21f07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-baremetal-operator-6f69995874-sw7cc_openshift-machine-api(3a07456d-2e8e-4e80-a777-d0903ad21f07)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-6f69995874-sw7cc_openshift-machine-api_3a07456d-2e8e-4e80-a777-d0903ad21f07_0(73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1): error adding pod openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1\\\" Netns:\\\"/var/run/netns/c40c9704-852e-4bc9-b0d3-82b705189f31\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-6f69995874-sw7cc;K8S_POD_INFRA_CONTAINER_ID=73d7911282e661faae2ef6d771553f23a55f92eb05625c01f5454193143f7ba1;K8S_POD_UID=3a07456d-2e8e-4e80-a777-d0903ad21f07\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc/3a07456d-2e8e-4e80-a777-d0903ad21f07]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-6f69995874-sw7cc in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-6f69995874-sw7cc)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" podUID="3a07456d-2e8e-4e80-a777-d0903ad21f07" Mar 19 09:20:33.136449 master-0 kubenswrapper[7385]: E0319 09:20:33.136394 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.136449 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-q4rkm_openshift-multus_3816f149-ddce-41c8-a540-fe866ee71c5e_0(76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-q4rkm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4" Netns:"/var/run/netns/9a04d9db-ca3a-4f07-a02b-03beb93f89b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-q4rkm;K8S_POD_INFRA_CONTAINER_ID=76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4;K8S_POD_UID=3816f149-ddce-41c8-a540-fe866ee71c5e" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm/3816f149-ddce-41c8-a540-fe866ee71c5e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-q4rkm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.136449 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.136449 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: E0319 09:20:33.136468 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_multus-admission-controller-5dbbb8b86f-q4rkm_openshift-multus_3816f149-ddce-41c8-a540-fe866ee71c5e_0(76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-q4rkm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4" Netns:"/var/run/netns/9a04d9db-ca3a-4f07-a02b-03beb93f89b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-q4rkm;K8S_POD_INFRA_CONTAINER_ID=76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4;K8S_POD_UID=3816f149-ddce-41c8-a540-fe866ee71c5e" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm/3816f149-ddce-41c8-a540-fe866ee71c5e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-q4rkm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} 
Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: > pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: E0319 09:20:33.136495 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-q4rkm_openshift-multus_3816f149-ddce-41c8-a540-fe866ee71c5e_0(76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-q4rkm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4" Netns:"/var/run/netns/9a04d9db-ca3a-4f07-a02b-03beb93f89b1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-q4rkm;K8S_POD_INFRA_CONTAINER_ID=76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4;K8S_POD_UID=3816f149-ddce-41c8-a540-fe866ee71c5e" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm/3816f149-ddce-41c8-a540-fe866ee71c5e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-q4rkm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.136673 master-0 
kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: > pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:20:33.136673 master-0 kubenswrapper[7385]: E0319 09:20:33.136575 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"multus-admission-controller-5dbbb8b86f-q4rkm_openshift-multus(3816f149-ddce-41c8-a540-fe866ee71c5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"multus-admission-controller-5dbbb8b86f-q4rkm_openshift-multus(3816f149-ddce-41c8-a540-fe866ee71c5e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-q4rkm_openshift-multus_3816f149-ddce-41c8-a540-fe866ee71c5e_0(76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-q4rkm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4\\\" Netns:\\\"/var/run/netns/9a04d9db-ca3a-4f07-a02b-03beb93f89b1\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-q4rkm;K8S_POD_INFRA_CONTAINER_ID=76cb3577f24a3cb8b08ce9cda8062c2c88399dcf1d93e0a4d42c905ae75ba5a4;K8S_POD_UID=3816f149-ddce-41c8-a540-fe866ee71c5e\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm/3816f149-ddce-41c8-a540-fe866ee71c5e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-q4rkm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-q4rkm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" Mar 19 09:20:33.305902 master-0 kubenswrapper[7385]: E0319 09:20:33.305839 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.305902 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-tlmxr_openshift-operator-lifecycle-manager_211d123b-829c-49dd-b119-e172cab607cf_0(4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-tlmxr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with 
status 400: 'ContainerID:"4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915" Netns:"/var/run/netns/bf463711-ecb4-48fc-b8b7-f186507afcb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-tlmxr;K8S_POD_INFRA_CONTAINER_ID=4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915;K8S_POD_UID=211d123b-829c-49dd-b119-e172cab607cf" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr/211d123b-829c-49dd-b119-e172cab607cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-tlmxr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.305902 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.305902 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: E0319 09:20:33.305918 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_catalog-operator-68f85b4d6c-tlmxr_openshift-operator-lifecycle-manager_211d123b-829c-49dd-b119-e172cab607cf_0(4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-tlmxr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915" Netns:"/var/run/netns/bf463711-ecb4-48fc-b8b7-f186507afcb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-tlmxr;K8S_POD_INFRA_CONTAINER_ID=4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915;K8S_POD_UID=211d123b-829c-49dd-b119-e172cab607cf" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr/211d123b-829c-49dd-b119-e172cab607cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-tlmxr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: > pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: E0319 09:20:33.305953 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-tlmxr_openshift-operator-lifecycle-manager_211d123b-829c-49dd-b119-e172cab607cf_0(4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-tlmxr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915" Netns:"/var/run/netns/bf463711-ecb4-48fc-b8b7-f186507afcb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-tlmxr;K8S_POD_INFRA_CONTAINER_ID=4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915;K8S_POD_UID=211d123b-829c-49dd-b119-e172cab607cf" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr/211d123b-829c-49dd-b119-e172cab607cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster 
comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-tlmxr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: > pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:20:33.306065 master-0 kubenswrapper[7385]: E0319 09:20:33.306014 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"catalog-operator-68f85b4d6c-tlmxr_openshift-operator-lifecycle-manager(211d123b-829c-49dd-b119-e172cab607cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"catalog-operator-68f85b4d6c-tlmxr_openshift-operator-lifecycle-manager(211d123b-829c-49dd-b119-e172cab607cf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-tlmxr_openshift-operator-lifecycle-manager_211d123b-829c-49dd-b119-e172cab607cf_0(4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-tlmxr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915\\\" 
Netns:\\\"/var/run/netns/bf463711-ecb4-48fc-b8b7-f186507afcb4\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-tlmxr;K8S_POD_INFRA_CONTAINER_ID=4a2670fb7754e902effb0de81503533121270a44abd2d9098936353c6d00d915;K8S_POD_UID=211d123b-829c-49dd-b119-e172cab607cf\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr/211d123b-829c-49dd-b119-e172cab607cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-tlmxr in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-tlmxr?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" podUID="211d123b-829c-49dd-b119-e172cab607cf" Mar 19 09:20:33.446818 master-0 kubenswrapper[7385]: E0319 09:20:33.446735 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.446818 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network 
sandbox k8s_network-metrics-daemon-lflg7_openshift-multus_bff5aeea-f859-4e38-bf1c-9e730025c212_0(47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9): error adding pod openshift-multus_network-metrics-daemon-lflg7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9" Netns:"/var/run/netns/48abe060-a641-4590-94d3-544a0b058834" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-lflg7;K8S_POD_INFRA_CONTAINER_ID=47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9;K8S_POD_UID=bff5aeea-f859-4e38-bf1c-9e730025c212" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-lflg7] networking: Multus: [openshift-multus/network-metrics-daemon-lflg7/bff5aeea-f859-4e38-bf1c-9e730025c212]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-lflg7?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.446818 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.446818 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: E0319 
09:20:33.446835 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-lflg7_openshift-multus_bff5aeea-f859-4e38-bf1c-9e730025c212_0(47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9): error adding pod openshift-multus_network-metrics-daemon-lflg7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9" Netns:"/var/run/netns/48abe060-a641-4590-94d3-544a0b058834" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-lflg7;K8S_POD_INFRA_CONTAINER_ID=47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9;K8S_POD_UID=bff5aeea-f859-4e38-bf1c-9e730025c212" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-lflg7] networking: Multus: [openshift-multus/network-metrics-daemon-lflg7/bff5aeea-f859-4e38-bf1c-9e730025c212]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-lflg7?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: > pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: E0319 09:20:33.446860 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-lflg7_openshift-multus_bff5aeea-f859-4e38-bf1c-9e730025c212_0(47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9): error adding pod openshift-multus_network-metrics-daemon-lflg7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9" Netns:"/var/run/netns/48abe060-a641-4590-94d3-544a0b058834" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-lflg7;K8S_POD_INFRA_CONTAINER_ID=47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9;K8S_POD_UID=bff5aeea-f859-4e38-bf1c-9e730025c212" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-lflg7] networking: Multus: [openshift-multus/network-metrics-daemon-lflg7/bff5aeea-f859-4e38-bf1c-9e730025c212]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-lflg7?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: > pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:20:33.447016 master-0 kubenswrapper[7385]: E0319 09:20:33.446926 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-lflg7_openshift-multus(bff5aeea-f859-4e38-bf1c-9e730025c212)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-lflg7_openshift-multus(bff5aeea-f859-4e38-bf1c-9e730025c212)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-lflg7_openshift-multus_bff5aeea-f859-4e38-bf1c-9e730025c212_0(47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9): error adding pod openshift-multus_network-metrics-daemon-lflg7 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9\\\" Netns:\\\"/var/run/netns/48abe060-a641-4590-94d3-544a0b058834\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-lflg7;K8S_POD_INFRA_CONTAINER_ID=47e2284a0288f4700e7443ba7633fa6b2b213e6a7ed888bcf0c9c3eec3018bb9;K8S_POD_UID=bff5aeea-f859-4e38-bf1c-9e730025c212\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-lflg7] networking: Multus: [openshift-multus/network-metrics-daemon-lflg7/bff5aeea-f859-4e38-bf1c-9e730025c212]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-lflg7 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-lflg7?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/network-metrics-daemon-lflg7" podUID="bff5aeea-f859-4e38-bf1c-9e730025c212" Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: E0319 09:20:33.447176 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-wptdb_openshift-monitoring_676f4062-ea34-48d0-80d7-3cd3d9da341e_0(0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59): error adding pod 
openshift-monitoring_cluster-monitoring-operator-58845fbb57-wptdb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59" Netns:"/var/run/netns/9ba66bd4-2839-463b-892f-cb4b83cf90e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-wptdb;K8S_POD_INFRA_CONTAINER_ID=0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59;K8S_POD_UID=676f4062-ea34-48d0-80d7-3cd3d9da341e" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb/676f4062-ea34-48d0-80d7-3cd3d9da341e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-wptdb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: E0319 09:20:33.447231 7385 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err=< Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-wptdb_openshift-monitoring_676f4062-ea34-48d0-80d7-3cd3d9da341e_0(0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-wptdb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59" Netns:"/var/run/netns/9ba66bd4-2839-463b-892f-cb4b83cf90e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-wptdb;K8S_POD_INFRA_CONTAINER_ID=0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59;K8S_POD_UID=676f4062-ea34-48d0-80d7-3cd3d9da341e" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb/676f4062-ea34-48d0-80d7-3cd3d9da341e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-wptdb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: > pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: E0319 09:20:33.447252 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-wptdb_openshift-monitoring_676f4062-ea34-48d0-80d7-3cd3d9da341e_0(0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-wptdb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59" Netns:"/var/run/netns/9ba66bd4-2839-463b-892f-cb4b83cf90e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-wptdb;K8S_POD_INFRA_CONTAINER_ID=0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59;K8S_POD_UID=676f4062-ea34-48d0-80d7-3cd3d9da341e" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb/676f4062-ea34-48d0-80d7-3cd3d9da341e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: 
SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-wptdb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.447273 master-0 kubenswrapper[7385]: > pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:20:33.447630 master-0 kubenswrapper[7385]: E0319 09:20:33.447306 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-monitoring-operator-58845fbb57-wptdb_openshift-monitoring(676f4062-ea34-48d0-80d7-3cd3d9da341e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-monitoring-operator-58845fbb57-wptdb_openshift-monitoring(676f4062-ea34-48d0-80d7-3cd3d9da341e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-wptdb_openshift-monitoring_676f4062-ea34-48d0-80d7-3cd3d9da341e_0(0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-wptdb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59\\\" 
Netns:\\\"/var/run/netns/9ba66bd4-2839-463b-892f-cb4b83cf90e8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-wptdb;K8S_POD_INFRA_CONTAINER_ID=0cb74f2fb74625525080861f57eece07d2a0f681e55c6957fed6a737bd757a59;K8S_POD_UID=676f4062-ea34-48d0-80d7-3cd3d9da341e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb/676f4062-ea34-48d0-80d7-3cd3d9da341e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-wptdb in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-wptdb?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" podUID="676f4062-ea34-48d0-80d7-3cd3d9da341e" Mar 19 09:20:33.483434 master-0 kubenswrapper[7385]: E0319 09:20:33.483377 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.483434 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network 
sandbox k8s_cluster-image-registry-operator-5549dc66cb-nc9rw_openshift-image-registry_d6cd2eac-6412-4f38-8272-743c67b218a3_0(2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef): error adding pod openshift-image-registry_cluster-image-registry-operator-5549dc66cb-nc9rw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef" Netns:"/var/run/netns/a8e936e8-d9c0-40f9-87cb-0589e49267cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-5549dc66cb-nc9rw;K8S_POD_INFRA_CONTAINER_ID=2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef;K8S_POD_UID=d6cd2eac-6412-4f38-8272-743c67b218a3" Path:"" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw/d6cd2eac-6412-4f38-8272-743c67b218a3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-5549dc66cb-nc9rw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.483434 master-0 kubenswrapper[7385]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.483434 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: E0319 09:20:33.483449 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-5549dc66cb-nc9rw_openshift-image-registry_d6cd2eac-6412-4f38-8272-743c67b218a3_0(2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef): error adding pod openshift-image-registry_cluster-image-registry-operator-5549dc66cb-nc9rw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef" Netns:"/var/run/netns/a8e936e8-d9c0-40f9-87cb-0589e49267cc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-5549dc66cb-nc9rw;K8S_POD_INFRA_CONTAINER_ID=2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef;K8S_POD_UID=d6cd2eac-6412-4f38-8272-743c67b218a3" Path:"" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw/d6cd2eac-6412-4f38-8272-743c67b218a3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: SetNetworkStatus: failed to update the 
pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-5549dc66cb-nc9rw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: > pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: E0319 09:20:33.483470 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-5549dc66cb-nc9rw_openshift-image-registry_d6cd2eac-6412-4f38-8272-743c67b218a3_0(2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef): error adding pod openshift-image-registry_cluster-image-registry-operator-5549dc66cb-nc9rw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef" Netns:"/var/run/netns/a8e936e8-d9c0-40f9-87cb-0589e49267cc" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-5549dc66cb-nc9rw;K8S_POD_INFRA_CONTAINER_ID=2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef;K8S_POD_UID=d6cd2eac-6412-4f38-8272-743c67b218a3" Path:"" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw/d6cd2eac-6412-4f38-8272-743c67b218a3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-5549dc66cb-nc9rw?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: > pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:20:33.483636 master-0 kubenswrapper[7385]: E0319 09:20:33.483538 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-image-registry-operator-5549dc66cb-nc9rw_openshift-image-registry(d6cd2eac-6412-4f38-8272-743c67b218a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cluster-image-registry-operator-5549dc66cb-nc9rw_openshift-image-registry(d6cd2eac-6412-4f38-8272-743c67b218a3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-image-registry-operator-5549dc66cb-nc9rw_openshift-image-registry_d6cd2eac-6412-4f38-8272-743c67b218a3_0(2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef): error adding pod openshift-image-registry_cluster-image-registry-operator-5549dc66cb-nc9rw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef\\\" Netns:\\\"/var/run/netns/a8e936e8-d9c0-40f9-87cb-0589e49267cc\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-image-registry;K8S_POD_NAME=cluster-image-registry-operator-5549dc66cb-nc9rw;K8S_POD_INFRA_CONTAINER_ID=2e4f4222214d713a5a38cfff77694d3809fa28edd5ea749dcc103bff66f3c5ef;K8S_POD_UID=d6cd2eac-6412-4f38-8272-743c67b218a3\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw] networking: Multus: [openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw/d6cd2eac-6412-4f38-8272-743c67b218a3]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-image-registry-operator-5549dc66cb-nc9rw in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-5549dc66cb-nc9rw?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" podUID="d6cd2eac-6412-4f38-8272-743c67b218a3" Mar 19 09:20:33.507002 master-0 kubenswrapper[7385]: E0319 09:20:33.506950 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.507002 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-stct6_openshift-marketplace_58fbf09a-3a26-45ab-8496-11d05c27e9cf_0(592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-stct6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a" Netns:"/var/run/netns/67034341-f0cc-4fc6-a631-bc0db57eeedf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-stct6;K8S_POD_INFRA_CONTAINER_ID=592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a;K8S_POD_UID=58fbf09a-3a26-45ab-8496-11d05c27e9cf" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-stct6] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-stct6/58fbf09a-3a26-45ab-8496-11d05c27e9cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
marketplace-operator-89ccd998f-stct6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-stct6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-stct6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.507002 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.507002 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: E0319 09:20:33.507024 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-stct6_openshift-marketplace_58fbf09a-3a26-45ab-8496-11d05c27e9cf_0(592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-stct6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a" Netns:"/var/run/netns/67034341-f0cc-4fc6-a631-bc0db57eeedf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-stct6;K8S_POD_INFRA_CONTAINER_ID=592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a;K8S_POD_UID=58fbf09a-3a26-45ab-8496-11d05c27e9cf" Path:"" ERRORED: 
error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-stct6] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-stct6/58fbf09a-3a26-45ab-8496-11d05c27e9cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-stct6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-stct6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-stct6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: > pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: E0319 09:20:33.507082 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-stct6_openshift-marketplace_58fbf09a-3a26-45ab-8496-11d05c27e9cf_0(592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-stct6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a" Netns:"/var/run/netns/67034341-f0cc-4fc6-a631-bc0db57eeedf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-stct6;K8S_POD_INFRA_CONTAINER_ID=592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a;K8S_POD_UID=58fbf09a-3a26-45ab-8496-11d05c27e9cf" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-stct6] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-stct6/58fbf09a-3a26-45ab-8496-11d05c27e9cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-stct6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-stct6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-stct6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: > pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:20:33.507207 master-0 kubenswrapper[7385]: E0319 09:20:33.507164 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-89ccd998f-stct6_openshift-marketplace(58fbf09a-3a26-45ab-8496-11d05c27e9cf)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"marketplace-operator-89ccd998f-stct6_openshift-marketplace(58fbf09a-3a26-45ab-8496-11d05c27e9cf)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-stct6_openshift-marketplace_58fbf09a-3a26-45ab-8496-11d05c27e9cf_0(592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-stct6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a\\\" Netns:\\\"/var/run/netns/67034341-f0cc-4fc6-a631-bc0db57eeedf\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-stct6;K8S_POD_INFRA_CONTAINER_ID=592491f0da199ad830f1662fea6af30da6ff6f9713d0204dda7d24d58b31c95a;K8S_POD_UID=58fbf09a-3a26-45ab-8496-11d05c27e9cf\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-stct6] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-stct6/58fbf09a-3a26-45ab-8496-11d05c27e9cf]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-stct6 in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-stct6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-stct6?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" podUID="58fbf09a-3a26-45ab-8496-11d05c27e9cf" Mar 19 09:20:33.526015 master-0 kubenswrapper[7385]: E0319 09:20:33.525969 7385 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 19 09:20:33.526015 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-fts6w_openshift-operator-lifecycle-manager_e25a16f3-dfe0-49c5-a31d-e310d369f406_0(16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-fts6w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d" Netns:"/var/run/netns/1d42b05f-1f00-4b36-9ef8-5c4ea81270cd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-fts6w;K8S_POD_INFRA_CONTAINER_ID=16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d;K8S_POD_UID=e25a16f3-dfe0-49c5-a31d-e310d369f406" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w/e25a16f3-dfe0-49c5-a31d-e310d369f406]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update 
the pod olm-operator-5c9796789-fts6w in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-fts6w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-fts6w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.526015 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.526015 master-0 kubenswrapper[7385]: > Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: E0319 09:20:33.526048 7385 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-fts6w_openshift-operator-lifecycle-manager_e25a16f3-dfe0-49c5-a31d-e310d369f406_0(16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-fts6w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d" Netns:"/var/run/netns/1d42b05f-1f00-4b36-9ef8-5c4ea81270cd" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-fts6w;K8S_POD_INFRA_CONTAINER_ID=16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d;K8S_POD_UID=e25a16f3-dfe0-49c5-a31d-e310d369f406" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w/e25a16f3-dfe0-49c5-a31d-e310d369f406]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-fts6w in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-fts6w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-fts6w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: > pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: E0319 09:20:33.526083 7385 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-fts6w_openshift-operator-lifecycle-manager_e25a16f3-dfe0-49c5-a31d-e310d369f406_0(16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d): 
error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-fts6w to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d" Netns:"/var/run/netns/1d42b05f-1f00-4b36-9ef8-5c4ea81270cd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-fts6w;K8S_POD_INFRA_CONTAINER_ID=16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d;K8S_POD_UID=e25a16f3-dfe0-49c5-a31d-e310d369f406" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w/e25a16f3-dfe0-49c5-a31d-e310d369f406]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-fts6w in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-fts6w in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-fts6w?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 19 09:20:33.526206 master-0 kubenswrapper[7385]: > pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:20:33.526428 master-0 kubenswrapper[7385]: 
E0319 09:20:33.526184 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"olm-operator-5c9796789-fts6w_openshift-operator-lifecycle-manager(e25a16f3-dfe0-49c5-a31d-e310d369f406)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"olm-operator-5c9796789-fts6w_openshift-operator-lifecycle-manager(e25a16f3-dfe0-49c5-a31d-e310d369f406)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-fts6w_openshift-operator-lifecycle-manager_e25a16f3-dfe0-49c5-a31d-e310d369f406_0(16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-fts6w to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d\\\" Netns:\\\"/var/run/netns/1d42b05f-1f00-4b36-9ef8-5c4ea81270cd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-fts6w;K8S_POD_INFRA_CONTAINER_ID=16b03abc7a368009b26f5a41a3938f4bc476c2b9fd3fd5166b49760d1c2b854d;K8S_POD_UID=e25a16f3-dfe0-49c5-a31d-e310d369f406\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w/e25a16f3-dfe0-49c5-a31d-e310d369f406]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-fts6w in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-fts6w in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-fts6w?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" podUID="e25a16f3-dfe0-49c5-a31d-e310d369f406" Mar 19 09:20:35.134719 master-0 kubenswrapper[7385]: I0319 09:20:35.134664 7385 generic.go:334] "Generic (PLEG): container finished" podID="a67ae8dc-240d-4708-9139-1d49c601e552" containerID="69c48f90f075a2cd2e8836a6c9cf1524c6d05160f72475eb6e7ea35e49cf68db" exitCode=0 Mar 19 09:20:36.053225 master-0 kubenswrapper[7385]: E0319 09:20:36.053157 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:20:36.053225 master-0 kubenswrapper[7385]: E0319 09:20:36.053187 7385 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 19 09:20:36.540377 master-0 kubenswrapper[7385]: E0319 09:20:36.540314 7385 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 19 09:20:36.540928 master-0 kubenswrapper[7385]: E0319 09:20:36.540507 7385 
kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.011s" Mar 19 09:20:36.540928 master-0 kubenswrapper[7385]: I0319 09:20:36.540538 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:20:36.551843 master-0 kubenswrapper[7385]: I0319 09:20:36.551789 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"58961d9e0be486e46715cb6bf5872c6474bbf247fb8ed12ba8931d59b7f9e590"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 19 09:20:36.552036 master-0 kubenswrapper[7385]: I0319 09:20:36.551884 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://58961d9e0be486e46715cb6bf5872c6474bbf247fb8ed12ba8931d59b7f9e590" gracePeriod=30 Mar 19 09:20:36.552147 master-0 kubenswrapper[7385]: I0319 09:20:36.552112 7385 scope.go:117] "RemoveContainer" containerID="9b3fc8a626e0487acce62c5d3181f8201f7287976a42754235b1309dbd2babb2" Mar 19 09:20:36.559675 master-0 kubenswrapper[7385]: I0319 09:20:36.557194 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 19 09:20:36.706885 master-0 kubenswrapper[7385]: E0319 09:20:36.706416 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 19 09:20:37.144574 master-0 kubenswrapper[7385]: I0319 09:20:37.144480 7385 
generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="58961d9e0be486e46715cb6bf5872c6474bbf247fb8ed12ba8931d59b7f9e590" exitCode=2 Mar 19 09:20:37.146043 master-0 kubenswrapper[7385]: I0319 09:20:37.146012 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-t7zwh_47da8964-3606-4181-87fb-8f04a3065295/approver/0.log" Mar 19 09:20:37.477055 master-0 kubenswrapper[7385]: I0319 09:20:37.476976 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ec98e408-a574-40eb-b84d-111edbaab81a/installer/0.log" Mar 19 09:20:37.477239 master-0 kubenswrapper[7385]: I0319 09:20:37.477085 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:20:37.511742 master-0 kubenswrapper[7385]: I0319 09:20:37.511706 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_20ba861f-a073-4d60-9136-041c2e98dd0f/installer/0.log" Mar 19 09:20:37.511878 master-0 kubenswrapper[7385]: I0319 09:20:37.511772 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 09:20:37.604391 master-0 kubenswrapper[7385]: I0319 09:20:37.604319 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-kubelet-dir\") pod \"20ba861f-a073-4d60-9136-041c2e98dd0f\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " Mar 19 09:20:37.604985 master-0 kubenswrapper[7385]: I0319 09:20:37.604413 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec98e408-a574-40eb-b84d-111edbaab81a-kube-api-access\") pod \"ec98e408-a574-40eb-b84d-111edbaab81a\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " Mar 19 09:20:37.604985 master-0 kubenswrapper[7385]: I0319 09:20:37.604457 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-var-lock\") pod \"20ba861f-a073-4d60-9136-041c2e98dd0f\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " Mar 19 09:20:37.604985 master-0 kubenswrapper[7385]: I0319 09:20:37.604474 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-var-lock\") pod \"ec98e408-a574-40eb-b84d-111edbaab81a\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " Mar 19 09:20:37.604985 master-0 kubenswrapper[7385]: I0319 09:20:37.604493 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-kubelet-dir\") pod \"ec98e408-a574-40eb-b84d-111edbaab81a\" (UID: \"ec98e408-a574-40eb-b84d-111edbaab81a\") " Mar 19 09:20:37.604985 master-0 kubenswrapper[7385]: I0319 09:20:37.604529 7385 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ba861f-a073-4d60-9136-041c2e98dd0f-kube-api-access\") pod \"20ba861f-a073-4d60-9136-041c2e98dd0f\" (UID: \"20ba861f-a073-4d60-9136-041c2e98dd0f\") " Mar 19 09:20:37.604985 master-0 kubenswrapper[7385]: I0319 09:20:37.604752 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-var-lock" (OuterVolumeSpecName: "var-lock") pod "20ba861f-a073-4d60-9136-041c2e98dd0f" (UID: "20ba861f-a073-4d60-9136-041c2e98dd0f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:20:37.604985 master-0 kubenswrapper[7385]: I0319 09:20:37.604815 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "20ba861f-a073-4d60-9136-041c2e98dd0f" (UID: "20ba861f-a073-4d60-9136-041c2e98dd0f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:20:37.605412 master-0 kubenswrapper[7385]: I0319 09:20:37.605341 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-var-lock" (OuterVolumeSpecName: "var-lock") pod "ec98e408-a574-40eb-b84d-111edbaab81a" (UID: "ec98e408-a574-40eb-b84d-111edbaab81a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:20:37.605412 master-0 kubenswrapper[7385]: I0319 09:20:37.605375 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ec98e408-a574-40eb-b84d-111edbaab81a" (UID: "ec98e408-a574-40eb-b84d-111edbaab81a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:20:37.607629 master-0 kubenswrapper[7385]: I0319 09:20:37.607592 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ba861f-a073-4d60-9136-041c2e98dd0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "20ba861f-a073-4d60-9136-041c2e98dd0f" (UID: "20ba861f-a073-4d60-9136-041c2e98dd0f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:20:37.615701 master-0 kubenswrapper[7385]: I0319 09:20:37.615656 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec98e408-a574-40eb-b84d-111edbaab81a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ec98e408-a574-40eb-b84d-111edbaab81a" (UID: "ec98e408-a574-40eb-b84d-111edbaab81a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:20:37.705481 master-0 kubenswrapper[7385]: I0319 09:20:37.705411 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec98e408-a574-40eb-b84d-111edbaab81a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:37.705481 master-0 kubenswrapper[7385]: I0319 09:20:37.705447 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:37.705481 master-0 kubenswrapper[7385]: I0319 09:20:37.705457 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:37.705481 master-0 kubenswrapper[7385]: I0319 09:20:37.705466 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec98e408-a574-40eb-b84d-111edbaab81a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:37.705481 master-0 kubenswrapper[7385]: I0319 09:20:37.705476 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20ba861f-a073-4d60-9136-041c2e98dd0f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:37.705481 master-0 kubenswrapper[7385]: I0319 09:20:37.705484 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20ba861f-a073-4d60-9136-041c2e98dd0f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:20:38.152853 master-0 kubenswrapper[7385]: I0319 09:20:38.152804 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_20ba861f-a073-4d60-9136-041c2e98dd0f/installer/0.log" Mar 19 09:20:38.153195 master-0 kubenswrapper[7385]: I0319 09:20:38.152994 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 19 09:20:38.154923 master-0 kubenswrapper[7385]: I0319 09:20:38.154890 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ec98e408-a574-40eb-b84d-111edbaab81a/installer/0.log" Mar 19 09:20:38.155022 master-0 kubenswrapper[7385]: I0319 09:20:38.154998 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 19 09:20:39.264792 master-0 kubenswrapper[7385]: E0319 09:20:39.264650 7385 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189e338d197ae844 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:19:44.341444676 +0000 UTC m=+80.015874387,LastTimestamp:2026-03-19 09:19:44.341444676 +0000 UTC m=+80.015874387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:20:39.540107 master-0 kubenswrapper[7385]: I0319 09:20:39.539982 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body= Mar 19 09:20:39.540107 master-0 kubenswrapper[7385]: I0319 09:20:39.540077 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" Mar 19 09:20:39.831168 master-0 
kubenswrapper[7385]: I0319 09:20:39.830987 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:39.831168 master-0 kubenswrapper[7385]: I0319 09:20:39.831078 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:42.539576 master-0 kubenswrapper[7385]: I0319 09:20:42.539473 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:42.539576 master-0 kubenswrapper[7385]: I0319 09:20:42.539535 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:42.830712 master-0 kubenswrapper[7385]: I0319 09:20:42.830491 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:42.830712 master-0 kubenswrapper[7385]: I0319 09:20:42.830595 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:45.397706 master-0 kubenswrapper[7385]: I0319 09:20:45.397624 7385 status_manager.go:851] "Failed to get status for pod" podUID="46f265536aba6292ead501bc9b49f327" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)"
Mar 19 09:20:45.539898 master-0 kubenswrapper[7385]: I0319 09:20:45.539823 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:45.539898 master-0 kubenswrapper[7385]: I0319 09:20:45.539892 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:46.117203 master-0 kubenswrapper[7385]: I0319 09:20:46.117142 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:46.117487 master-0 kubenswrapper[7385]: I0319 09:20:46.117212 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:46.908323 master-0 kubenswrapper[7385]: E0319 09:20:46.908254 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 19 09:20:48.539058 master-0 kubenswrapper[7385]: I0319 09:20:48.538987 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:48.539058 master-0 kubenswrapper[7385]: I0319 09:20:48.539055 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:50.672808 master-0 kubenswrapper[7385]: I0319 09:20:50.672718 7385 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-z9khh container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body=
Mar 19 09:20:50.673768 master-0 kubenswrapper[7385]: I0319 09:20:50.672815 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" podUID="fe1881fb-c670-442a-a092-c1eee6b7d5e5" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused"
Mar 19 09:20:51.539522 master-0 kubenswrapper[7385]: I0319 09:20:51.539431 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:51.539522 master-0 kubenswrapper[7385]: I0319 09:20:51.539506 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:52.501851 master-0 kubenswrapper[7385]: I0319 09:20:52.501772 7385 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-cfmgj container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body=
Mar 19 09:20:52.502338 master-0 kubenswrapper[7385]: I0319 09:20:52.501869 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" podUID="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused"
Mar 19 09:20:54.539451 master-0 kubenswrapper[7385]: I0319 09:20:54.539410 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:54.540132 master-0 kubenswrapper[7385]: I0319 09:20:54.540095 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:20:56.429203 master-0 kubenswrapper[7385]: E0319 09:20:56.428971 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:20:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:20:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:20:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-19T09:20:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27\\\"],\\\"sizeBytes\\\":504662731},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:20:57.309457 master-0 kubenswrapper[7385]: E0319 09:20:57.309363 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Mar 19 09:20:57.539569 master-0 kubenswrapper[7385]: I0319 09:20:57.539465 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:20:57.539569 master-0 kubenswrapper[7385]: I0319 09:20:57.539528 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:00.539888 master-0 kubenswrapper[7385]: I0319 09:21:00.539821 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:00.540635 master-0 kubenswrapper[7385]: I0319 09:21:00.539900 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:00.672517 master-0 kubenswrapper[7385]: I0319 09:21:00.672423 7385 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-z9khh container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body=
Mar 19 09:21:00.672758 master-0 kubenswrapper[7385]: I0319 09:21:00.672530 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" podUID="fe1881fb-c670-442a-a092-c1eee6b7d5e5" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused"
Mar 19 09:21:03.539432 master-0 kubenswrapper[7385]: I0319 09:21:03.539369 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:03.540331 master-0 kubenswrapper[7385]: I0319 09:21:03.539745 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:06.429670 master-0 kubenswrapper[7385]: E0319 09:21:06.429525 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:06.539738 master-0 kubenswrapper[7385]: I0319 09:21:06.539657 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:06.539975 master-0 kubenswrapper[7385]: I0319 09:21:06.539739 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:08.111422 master-0 kubenswrapper[7385]: E0319 09:21:08.111361 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 19 09:21:09.540111 master-0 kubenswrapper[7385]: I0319 09:21:09.540009 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:09.540111 master-0 kubenswrapper[7385]: I0319 09:21:09.540080 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:10.560189 master-0 kubenswrapper[7385]: E0319 09:21:10.560125 7385 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 19 09:21:10.560656 master-0 kubenswrapper[7385]: E0319 09:21:10.560289 7385 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.018s"
Mar 19 09:21:10.566426 master-0 kubenswrapper[7385]: I0319 09:21:10.566389 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 19 09:21:10.672691 master-0 kubenswrapper[7385]: I0319 09:21:10.672514 7385 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-z9khh container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body=
Mar 19 09:21:10.672691 master-0 kubenswrapper[7385]: I0319 09:21:10.672684 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" podUID="fe1881fb-c670-442a-a092-c1eee6b7d5e5" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused"
Mar 19 09:21:11.321410 master-0 kubenswrapper[7385]: I0319 09:21:11.321278 7385 generic.go:334] "Generic (PLEG): container finished" podID="525b41b5-82d8-4d47-8350-79644a2c9360" containerID="24b10bdbe30c7b6a34e02317c7a4fad144a2b0ece63d82300dc1de99318fd6fe" exitCode=0
Mar 19 09:21:11.323154 master-0 kubenswrapper[7385]: I0319 09:21:11.323108 7385 generic.go:334] "Generic (PLEG): container finished" podID="17e0cb4a-e776-4886-927e-ae446af7f234" containerID="c30f2036341c158a4a311a14ce582436d41a1a42842791b6c421ca4a779f1492" exitCode=0
Mar 19 09:21:12.539967 master-0 kubenswrapper[7385]: I0319 09:21:12.539903 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:12.539967 master-0 kubenswrapper[7385]: I0319 09:21:12.539958 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:13.268325 master-0 kubenswrapper[7385]: E0319 09:21:13.268132 7385 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189e338d2882fc53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:19:44.593632339 +0000 UTC m=+80.268062060,LastTimestamp:2026-03-19 09:19:44.593632339 +0000 UTC m=+80.268062060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 09:21:13.333620 master-0 kubenswrapper[7385]: I0319 09:21:13.333527 7385 generic.go:334] "Generic (PLEG): container finished" podID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerID="8c68ece13612c392b8986c6036f0fb5686c420aa3d85d8318f1363a956c12d2e" exitCode=0
Mar 19 09:21:14.339658 master-0 kubenswrapper[7385]: I0319 09:21:14.339525 7385 generic.go:334] "Generic (PLEG): container finished" podID="70258988-8374-4aee-aaa2-be3c2e853062" containerID="48e3bb33c4cfc2acfda10baf096f5ef90778cf5f988e45ef005dd24496a67e52" exitCode=0
Mar 19 09:21:14.341155 master-0 kubenswrapper[7385]: I0319 09:21:14.341124 7385 generic.go:334] "Generic (PLEG): container finished" podID="53bff8e4-bf60-4386-8905-49d43fd6c420" containerID="3c8b4e82c1555c09e55296bfca35644f6006a9bed8037eabe78692b05714698a" exitCode=0
Mar 19 09:21:15.346949 master-0 kubenswrapper[7385]: I0319 09:21:15.346865 7385 generic.go:334] "Generic (PLEG): container finished" podID="fe1881fb-c670-442a-a092-c1eee6b7d5e5" containerID="68fbf6321802565874265d19454cbc64b4b4b521a0e102ded43536ee428b4258" exitCode=0
Mar 19 09:21:15.539461 master-0 kubenswrapper[7385]: I0319 09:21:15.539353 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:15.539461 master-0 kubenswrapper[7385]: I0319 09:21:15.539452 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:16.429844 master-0 kubenswrapper[7385]: E0319 09:21:16.429793 7385 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Mar 19 09:21:18.539379 master-0 kubenswrapper[7385]: I0319 09:21:18.539315 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:18.540283 master-0 kubenswrapper[7385]: I0319 09:21:18.539388 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:19.365955 master-0 kubenswrapper[7385]: I0319 09:21:19.365921 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/0.log"
Mar 19 09:21:19.366219 master-0 kubenswrapper[7385]: I0319 09:21:19.366193 7385 generic.go:334] "Generic (PLEG): container finished" podID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b" containerID="8140af4cb4bb09d2ed5ad0f6ec653bbb3dc06a4515b9db389545823579fd212a" exitCode=1
Mar 19 09:21:19.712520 master-0 kubenswrapper[7385]: E0319 09:21:19.712442 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 19 09:21:20.534499 master-0 kubenswrapper[7385]: E0319 09:21:20.534390 7385 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.974s"
Mar 19 09:21:20.534499 master-0 kubenswrapper[7385]: I0319 09:21:20.534454 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" event={"ID":"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c","Type":"ContainerDied","Data":"794857861e41452767f8150da770c0fdb6415a1b4c58da2ca5c6bb1b5694eb77"}
Mar 19 09:21:20.535086 master-0 kubenswrapper[7385]: I0319 09:21:20.535034 7385 scope.go:117] "RemoveContainer" containerID="794857861e41452767f8150da770c0fdb6415a1b4c58da2ca5c6bb1b5694eb77"
Mar 19 09:21:20.548345 master-0 kubenswrapper[7385]: I0319 09:21:20.548138 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 19 09:21:20.550927 master-0 kubenswrapper[7385]: I0319 09:21:20.550871 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e3ab0802-da8a-475c-a707-09f7838f580b","Type":"ContainerDied","Data":"a1c35003004ca85e3194260594ce7980c9cfead4c46c7a6e5e65ede51128fa87"}
Mar 19 09:21:20.550927 master-0 kubenswrapper[7385]: I0319 09:21:20.550926 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:21:20.551305 master-0 kubenswrapper[7385]: I0319 09:21:20.551275 7385 scope.go:117] "RemoveContainer" containerID="8c68ece13612c392b8986c6036f0fb5686c420aa3d85d8318f1363a956c12d2e"
Mar 19 09:21:20.551372 master-0 kubenswrapper[7385]: I0319 09:21:20.551170 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:21:20.551590 master-0 kubenswrapper[7385]: I0319 09:21:20.551562 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:21:20.551590 master-0 kubenswrapper[7385]: I0319 09:21:20.551591 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:21:20.551704 master-0 kubenswrapper[7385]: I0319 09:21:20.551606 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ec98e408-a574-40eb-b84d-111edbaab81a","Type":"ContainerDied","Data":"bad1a4ade656dc88a2ff2cedf66c5fd93d2a5c35714abd9bee1ca36e672bdec3"}
Mar 19 09:21:20.551704 master-0 kubenswrapper[7385]: I0319 09:21:20.551625 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"20ba861f-a073-4d60-9136-041c2e98dd0f","Type":"ContainerDied","Data":"ab53721c199f233bd43c54da36cf0743a555ab62518f114872a0db72d2d2af5a"}
Mar 19 09:21:20.551704 master-0 kubenswrapper[7385]: I0319 09:21:20.551640 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"f969fe9873a3954169d30a02594ff223c659b89547ce589e4efba58ec438e923"}
Mar 19 09:21:20.551704 master-0 kubenswrapper[7385]: I0319 09:21:20.551658 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" event={"ID":"46c7cde3-2cb4-4fa8-94ca-d5feff877da9","Type":"ContainerDied","Data":"e9208fca3070b80809292873e901e7513b6e0cbe29792fde8a62dcde9ce791be"}
Mar 19 09:21:20.551704 master-0 kubenswrapper[7385]: I0319 09:21:20.551672 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" event={"ID":"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4","Type":"ContainerDied","Data":"13d37b6e0fd525b422b8c24e6c520e3e647d99050d3e3d8fce7cd4856511e27f"}
Mar 19 09:21:20.551704 master-0 kubenswrapper[7385]: I0319 09:21:20.551689 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-t7zwh" event={"ID":"47da8964-3606-4181-87fb-8f04a3065295","Type":"ContainerDied","Data":"9b3fc8a626e0487acce62c5d3181f8201f7287976a42754235b1309dbd2babb2"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551704 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"577e1cb78b7983d3ec252dc0914c0a0c436d8757170116f9a3b932229b0de3fc"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551728 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"bf41ec4f73d991e650705cd7dc50f09d5379b830fb106a5b2bf29cf8cf16aa01"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551741 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"3ed225a36fa4421795f63a78a99d058f08eb76290885a7395566f826ec754799"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551753 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"cfcf72a5968a35b223ff650bf76501a556c4762493ff456643c088edb64e0ea9"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551764 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"2aa5aa662ffa0437e2fa27777a57474f61a992c00f287dd244d781ce0481e24a"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551776 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" event={"ID":"ca2f7cb3-8812-4fe3-83a5-61668ef87f99","Type":"ContainerDied","Data":"3335c7fc18f5f7e2694a86064d55e2221326f9866ff420531a852d42c29d0c0d"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551791 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" event={"ID":"012cdc1d-ebc8-431e-9a52-9a39de95dd0d","Type":"ContainerDied","Data":"7f84fbd703825db689c03d2baee5e05e0406b0c7857947e23dfe9649aed6fbc3"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551805 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" event={"ID":"a67ae8dc-240d-4708-9139-1d49c601e552","Type":"ContainerDied","Data":"69c48f90f075a2cd2e8836a6c9cf1524c6d05160f72475eb6e7ea35e49cf68db"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551928 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"58961d9e0be486e46715cb6bf5872c6474bbf247fb8ed12ba8931d59b7f9e590"}
Mar 19 09:21:20.551980 master-0 kubenswrapper[7385]: I0319 09:21:20.551980 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"56d983e0cc6cd1122ae8e1e8f833654b8157419cc9f034610ad57896ed648267"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.551998 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-t7zwh" event={"ID":"47da8964-3606-4181-87fb-8f04a3065295","Type":"ContainerStarted","Data":"380db29610ce50b23d444ae24a9a82ff721513171d94f5e05240298cc4418dff"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552013 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"20ba861f-a073-4d60-9136-041c2e98dd0f","Type":"ContainerDied","Data":"d9e18fc195cbf5fb27f76f640c42d213bffb004a73cf242e7c9e02beeff1062a"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552025 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e18fc195cbf5fb27f76f640c42d213bffb004a73cf242e7c9e02beeff1062a"
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552038 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ec98e408-a574-40eb-b84d-111edbaab81a","Type":"ContainerDied","Data":"b2d499fdc3d3fa2bc3d6bd17fe41bec26683d20fa2510fec111d840f7bf16b36"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552050 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d499fdc3d3fa2bc3d6bd17fe41bec26683d20fa2510fec111d840f7bf16b36"
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552061 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" event={"ID":"525b41b5-82d8-4d47-8350-79644a2c9360","Type":"ContainerDied","Data":"24b10bdbe30c7b6a34e02317c7a4fad144a2b0ece63d82300dc1de99318fd6fe"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552076 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" event={"ID":"17e0cb4a-e776-4886-927e-ae446af7f234","Type":"ContainerDied","Data":"c30f2036341c158a4a311a14ce582436d41a1a42842791b6c421ca4a779f1492"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552093 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerDied","Data":"8c68ece13612c392b8986c6036f0fb5686c420aa3d85d8318f1363a956c12d2e"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552108 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" event={"ID":"70258988-8374-4aee-aaa2-be3c2e853062","Type":"ContainerDied","Data":"48e3bb33c4cfc2acfda10baf096f5ef90778cf5f988e45ef005dd24496a67e52"}
Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552123 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" event={"ID":"53bff8e4-bf60-4386-8905-49d43fd6c420","Type":"ContainerDied","Data":"3c8b4e82c1555c09e55296bfca35644f6006a9bed8037eabe78692b05714698a"} Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552139 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" event={"ID":"fe1881fb-c670-442a-a092-c1eee6b7d5e5","Type":"ContainerDied","Data":"68fbf6321802565874265d19454cbc64b4b4b521a0e102ded43536ee428b4258"} Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552153 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerDied","Data":"8140af4cb4bb09d2ed5ad0f6ec653bbb3dc06a4515b9db389545823579fd212a"} Mar 19 09:21:20.552362 master-0 kubenswrapper[7385]: I0319 09:21:20.552247 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:21:20.553058 master-0 kubenswrapper[7385]: I0319 09:21:20.552391 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:21:20.553058 master-0 kubenswrapper[7385]: I0319 09:21:20.552485 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:21:20.553058 master-0 kubenswrapper[7385]: I0319 09:21:20.553038 7385 scope.go:117] "RemoveContainer" containerID="8140af4cb4bb09d2ed5ad0f6ec653bbb3dc06a4515b9db389545823579fd212a" Mar 19 09:21:20.553841 master-0 kubenswrapper[7385]: I0319 09:21:20.553304 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:21:20.553841 master-0 kubenswrapper[7385]: I0319 09:21:20.553314 7385 scope.go:117] "RemoveContainer" containerID="e936e2d314dab9154842440cf41e00874f26fcc073cf860d24367374f28b489d" Mar 19 09:21:20.553841 master-0 kubenswrapper[7385]: I0319 09:21:20.553379 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:21:20.553841 master-0 kubenswrapper[7385]: I0319 09:21:20.553407 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:21:20.553841 master-0 kubenswrapper[7385]: I0319 09:21:20.553490 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:21:20.553841 master-0 kubenswrapper[7385]: I0319 09:21:20.553569 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.555053 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.556596 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.556970 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.557433 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.557743 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.557803 7385 scope.go:117] "RemoveContainer" containerID="68fbf6321802565874265d19454cbc64b4b4b521a0e102ded43536ee428b4258" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.557935 7385 scope.go:117] "RemoveContainer" containerID="3335c7fc18f5f7e2694a86064d55e2221326f9866ff420531a852d42c29d0c0d" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.558382 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.558712 7385 scope.go:117] "RemoveContainer" containerID="c30f2036341c158a4a311a14ce582436d41a1a42842791b6c421ca4a779f1492" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.559047 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.559068 7385 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="1f43097f-e9f2-4a90-b726-0dafc3f0d40d" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.559471 7385 scope.go:117] "RemoveContainer" containerID="e9208fca3070b80809292873e901e7513b6e0cbe29792fde8a62dcde9ce791be" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.559730 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.560076 7385 scope.go:117] "RemoveContainer" containerID="13d37b6e0fd525b422b8c24e6c520e3e647d99050d3e3d8fce7cd4856511e27f" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.560204 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.560632 7385 scope.go:117] "RemoveContainer" containerID="69c48f90f075a2cd2e8836a6c9cf1524c6d05160f72475eb6e7ea35e49cf68db" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.561598 7385 scope.go:117] "RemoveContainer" containerID="7f84fbd703825db689c03d2baee5e05e0406b0c7857947e23dfe9649aed6fbc3" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.561808 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.561907 7385 scope.go:117] "RemoveContainer" containerID="24b10bdbe30c7b6a34e02317c7a4fad144a2b0ece63d82300dc1de99318fd6fe" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.561999 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.562014 7385 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="1f43097f-e9f2-4a90-b726-0dafc3f0d40d" Mar 19 09:21:20.562095 master-0 kubenswrapper[7385]: I0319 09:21:20.562019 7385 scope.go:117] "RemoveContainer" containerID="48e3bb33c4cfc2acfda10baf096f5ef90778cf5f988e45ef005dd24496a67e52" Mar 19 09:21:20.567589 master-0 kubenswrapper[7385]: I0319 09:21:20.562446 7385 scope.go:117] "RemoveContainer" containerID="3c8b4e82c1555c09e55296bfca35644f6006a9bed8037eabe78692b05714698a" Mar 19 09:21:20.567589 master-0 kubenswrapper[7385]: I0319 09:21:20.562974 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:21:20.567589 master-0 kubenswrapper[7385]: I0319 09:21:20.565311 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:21:20.567589 master-0 kubenswrapper[7385]: I0319 09:21:20.565713 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:21:20.570139 master-0 kubenswrapper[7385]: I0319 09:21:20.570103 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:21:21.030908 master-0 kubenswrapper[7385]: I0319 09:21:21.030785 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lflg7"] Mar 19 09:21:21.108729 master-0 kubenswrapper[7385]: I0319 09:21:21.105625 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"] Mar 19 09:21:21.173740 master-0 kubenswrapper[7385]: I0319 09:21:21.173093 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-stct6"] Mar 19 09:21:21.285273 master-0 kubenswrapper[7385]: I0319 09:21:21.284229 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"] Mar 19 09:21:21.379858 master-0 kubenswrapper[7385]: I0319 09:21:21.378098 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"] Mar 19 09:21:21.412137 master-0 kubenswrapper[7385]: I0319 09:21:21.411755 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"] Mar 19 09:21:21.425281 master-0 kubenswrapper[7385]: I0319 09:21:21.425251 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" event={"ID":"53bff8e4-bf60-4386-8905-49d43fd6c420","Type":"ContainerStarted","Data":"63daec6a7a54ee857885e15f0afbbf6fb5689d16eaffe329ad8c85a73d06000a"} Mar 19 09:21:21.443663 master-0 kubenswrapper[7385]: I0319 09:21:21.443617 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" 
event={"ID":"58fbf09a-3a26-45ab-8496-11d05c27e9cf","Type":"ContainerStarted","Data":"ec43fc3d3a5ac191c7efb625569a2dc8960d02c6765df5d0352ccc2d0da0a0a4"} Mar 19 09:21:21.461989 master-0 kubenswrapper[7385]: I0319 09:21:21.458753 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"] Mar 19 09:21:21.468241 master-0 kubenswrapper[7385]: I0319 09:21:21.468131 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" event={"ID":"17e0cb4a-e776-4886-927e-ae446af7f234","Type":"ContainerStarted","Data":"a362a5eb64fc47d7d2d526a275caf933791760757611bb9ca0864bf110bdb483"} Mar 19 09:21:21.470613 master-0 kubenswrapper[7385]: I0319 09:21:21.470559 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" event={"ID":"e25a16f3-dfe0-49c5-a31d-e310d369f406","Type":"ContainerStarted","Data":"4e41041845987412c5331ff6cc2618d3c5ae42cf3d9f83fd7b71a693c8e76498"} Mar 19 09:21:21.478603 master-0 kubenswrapper[7385]: W0319 09:21:21.478491 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a07456d_2e8e_4e80_a777_d0903ad21f07.slice/crio-819b5de997e19e19a9d977e809d0fb3fdd9648622a344dd4ddd33e56129c529f WatchSource:0}: Error finding container 819b5de997e19e19a9d977e809d0fb3fdd9648622a344dd4ddd33e56129c529f: Status 404 returned error can't find the container with id 819b5de997e19e19a9d977e809d0fb3fdd9648622a344dd4ddd33e56129c529f Mar 19 09:21:21.511844 master-0 kubenswrapper[7385]: I0319 09:21:21.511819 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/0.log" Mar 19 09:21:21.512078 master-0 kubenswrapper[7385]: I0319 09:21:21.512056 7385 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerStarted","Data":"955abd98da497abd3bbc8af184913584cf2b14be52bdce5885deda84e0aeecd4"} Mar 19 09:21:21.544738 master-0 kubenswrapper[7385]: I0319 09:21:21.544693 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" event={"ID":"d6cd2eac-6412-4f38-8272-743c67b218a3","Type":"ContainerStarted","Data":"aeeb874811e84346db41fb4fb7b6cad106590322b692edfbf0b6c383addea6a6"} Mar 19 09:21:21.605178 master-0 kubenswrapper[7385]: I0319 09:21:21.603997 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" event={"ID":"012cdc1d-ebc8-431e-9a52-9a39de95dd0d","Type":"ContainerStarted","Data":"121fbce462a7eafb62e39e83f1f28d2288860d27710d3e9a06350c53d4d1dd76"} Mar 19 09:21:21.633917 master-0 kubenswrapper[7385]: I0319 09:21:21.633354 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" event={"ID":"525b41b5-82d8-4d47-8350-79644a2c9360","Type":"ContainerStarted","Data":"70d174fd4e01098348af77daa0e495ddb88708e136a02b054e3fa91916dd11b3"} Mar 19 09:21:21.642379 master-0 kubenswrapper[7385]: I0319 09:21:21.642324 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"] Mar 19 09:21:21.652216 master-0 kubenswrapper[7385]: I0319 09:21:21.652163 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"] Mar 19 09:21:21.686927 master-0 kubenswrapper[7385]: I0319 09:21:21.673638 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" 
event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerStarted","Data":"33a0eabafa6de07993391ffc1dff5fcd967838ca425e080e0901a5f9624f873e"} Mar 19 09:21:21.686927 master-0 kubenswrapper[7385]: I0319 09:21:21.674475 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:21:21.707938 master-0 kubenswrapper[7385]: I0319 09:21:21.705909 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"] Mar 19 09:21:21.710693 master-0 kubenswrapper[7385]: I0319 09:21:21.710657 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lflg7" event={"ID":"bff5aeea-f859-4e38-bf1c-9e730025c212","Type":"ContainerStarted","Data":"280c1ab0d20d5f0a1fc3fe957fae99e999c792256be0729f4bd66bf08519c5bf"} Mar 19 09:21:21.713860 master-0 kubenswrapper[7385]: I0319 09:21:21.711868 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"] Mar 19 09:21:21.716995 master-0 kubenswrapper[7385]: I0319 09:21:21.716972 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" event={"ID":"46c7cde3-2cb4-4fa8-94ca-d5feff877da9","Type":"ContainerStarted","Data":"b297a8532e011702860e4a5b995f83d018733a034a87ece0afddfd7437f7f8f5"} Mar 19 09:21:21.746215 master-0 kubenswrapper[7385]: I0319 09:21:21.746169 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" event={"ID":"a67ae8dc-240d-4708-9139-1d49c601e552","Type":"ContainerStarted","Data":"b5a43433ad01d4c8d725deb00c57fbbcb1186578ae1700355cef7f732ced844c"} Mar 19 09:21:21.779745 master-0 kubenswrapper[7385]: I0319 09:21:21.778806 7385 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" event={"ID":"fe1881fb-c670-442a-a092-c1eee6b7d5e5","Type":"ContainerStarted","Data":"f4bffeec1cd2a6c9d1bd3d0557a50165f71cd47937001ed7d994ee96e6f4f2fd"} Mar 19 09:21:21.807853 master-0 kubenswrapper[7385]: I0319 09:21:21.805015 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" event={"ID":"70258988-8374-4aee-aaa2-be3c2e853062","Type":"ContainerStarted","Data":"3e4b6d4a6ba7dc16d944e3b9eee5d338268651e600b3b4017cd71ee472e3564c"} Mar 19 09:21:21.824666 master-0 kubenswrapper[7385]: I0319 09:21:21.822965 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-gkvf5_bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/network-operator/0.log" Mar 19 09:21:21.824666 master-0 kubenswrapper[7385]: I0319 09:21:21.823738 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" event={"ID":"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c","Type":"ContainerStarted","Data":"e66ba3a286909d7b97e3eb26b78b07f3a732764b9e9c91fe3431805ee65e9c6f"} Mar 19 09:21:21.865583 master-0 kubenswrapper[7385]: I0319 09:21:21.862318 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 19 09:21:22.194167 master-0 kubenswrapper[7385]: I0319 09:21:22.194105 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_e3ab0802-da8a-475c-a707-09f7838f580b/installer/0.log" Mar 19 09:21:22.194717 master-0 kubenswrapper[7385]: I0319 09:21:22.194192 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 19 09:21:22.313578 master-0 kubenswrapper[7385]: I0319 09:21:22.313506 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-kubelet-dir\") pod \"e3ab0802-da8a-475c-a707-09f7838f580b\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " Mar 19 09:21:22.313785 master-0 kubenswrapper[7385]: I0319 09:21:22.313738 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e3ab0802-da8a-475c-a707-09f7838f580b" (UID: "e3ab0802-da8a-475c-a707-09f7838f580b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:21:22.314646 master-0 kubenswrapper[7385]: I0319 09:21:22.314173 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3ab0802-da8a-475c-a707-09f7838f580b-kube-api-access\") pod \"e3ab0802-da8a-475c-a707-09f7838f580b\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " Mar 19 09:21:22.314646 master-0 kubenswrapper[7385]: I0319 09:21:22.314259 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-var-lock\") pod \"e3ab0802-da8a-475c-a707-09f7838f580b\" (UID: \"e3ab0802-da8a-475c-a707-09f7838f580b\") " Mar 19 09:21:22.314646 master-0 kubenswrapper[7385]: I0319 09:21:22.314368 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-var-lock" (OuterVolumeSpecName: "var-lock") pod "e3ab0802-da8a-475c-a707-09f7838f580b" (UID: "e3ab0802-da8a-475c-a707-09f7838f580b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:21:22.314646 master-0 kubenswrapper[7385]: I0319 09:21:22.314529 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:21:22.314646 master-0 kubenswrapper[7385]: I0319 09:21:22.314571 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab0802-da8a-475c-a707-09f7838f580b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:21:22.317053 master-0 kubenswrapper[7385]: I0319 09:21:22.316991 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ab0802-da8a-475c-a707-09f7838f580b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e3ab0802-da8a-475c-a707-09f7838f580b" (UID: "e3ab0802-da8a-475c-a707-09f7838f580b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:21:22.415751 master-0 kubenswrapper[7385]: I0319 09:21:22.415608 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3ab0802-da8a-475c-a707-09f7838f580b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:21:22.834666 master-0 kubenswrapper[7385]: I0319 09:21:22.834561 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" event={"ID":"ca2f7cb3-8812-4fe3-83a5-61668ef87f99","Type":"ContainerStarted","Data":"685e4b432ade20b1c50ec1b3266543948892457d2831f66c3796f3777b544a6e"} Mar 19 09:21:22.839433 master-0 kubenswrapper[7385]: I0319 09:21:22.839364 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" event={"ID":"c222998f-6211-4466-8ad7-5d9fcfb10789","Type":"ContainerStarted","Data":"59f6bd7c71cef38a081c02cb6b10f5c8ae38983e252b2ae11936cc10707847c9"} Mar 19 09:21:22.839433 master-0 kubenswrapper[7385]: I0319 09:21:22.839404 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" event={"ID":"c222998f-6211-4466-8ad7-5d9fcfb10789","Type":"ContainerStarted","Data":"9f898450aabd10f55a00aca1216b3ea60aa3a67621f1566bfc4bf787f1440f93"} Mar 19 09:21:22.839433 master-0 kubenswrapper[7385]: I0319 09:21:22.839414 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" event={"ID":"c222998f-6211-4466-8ad7-5d9fcfb10789","Type":"ContainerStarted","Data":"2b6eced12019f1a054184dc214ff7951a270b910027060a2b561a895337a163e"} Mar 19 09:21:22.840982 master-0 kubenswrapper[7385]: I0319 09:21:22.840956 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" event={"ID":"211d123b-829c-49dd-b119-e172cab607cf","Type":"ContainerStarted","Data":"8928fc78a20804bb52860e947962b354cf91d1529b5deb719ab35788e3ef8791"} Mar 19 09:21:22.843976 master-0 kubenswrapper[7385]: I0319 09:21:22.843951 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" event={"ID":"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4","Type":"ContainerStarted","Data":"c28fd5198d7f8466f8d4a9327cbc9eb5d80742ce9844b91bf8ba1a1a20dc6eae"} Mar 19 09:21:22.845278 master-0 kubenswrapper[7385]: I0319 09:21:22.845255 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"0e413507c6f4a8e010e922bcd426014dd970b85408295730281ace1a504f9959"} Mar 19 09:21:22.847790 master-0 kubenswrapper[7385]: I0319 09:21:22.847743 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" event={"ID":"3816f149-ddce-41c8-a540-fe866ee71c5e","Type":"ContainerStarted","Data":"61558dca744350def3b0a516cd7192d3505c64b58643571bb0e2e07f06bffb85"} Mar 19 09:21:22.849679 master-0 kubenswrapper[7385]: I0319 09:21:22.849617 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" event={"ID":"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc","Type":"ContainerStarted","Data":"a5ee324efd32a0146a7d92c8e13c95cf3a24ab8f0310db8a2d3895929ad3075e"} Mar 19 09:21:22.849679 master-0 kubenswrapper[7385]: I0319 09:21:22.849648 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" event={"ID":"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc","Type":"ContainerStarted","Data":"5aefb6138adeb7d46c141d72648e74fb238235b8d8af02bde5beca7c384d92e7"} 
Mar 19 09:21:22.853619 master-0 kubenswrapper[7385]: I0319 09:21:22.853389 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_e3ab0802-da8a-475c-a707-09f7838f580b/installer/0.log"
Mar 19 09:21:22.853619 master-0 kubenswrapper[7385]: I0319 09:21:22.853486 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:21:22.856433 master-0 kubenswrapper[7385]: I0319 09:21:22.853864 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"e3ab0802-da8a-475c-a707-09f7838f580b","Type":"ContainerDied","Data":"c2904eb335d23e11e23721447bebed6e83898b398c508def8b073f85f1f0f7e4"}
Mar 19 09:21:22.856433 master-0 kubenswrapper[7385]: I0319 09:21:22.853904 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2904eb335d23e11e23721447bebed6e83898b398c508def8b073f85f1f0f7e4"
Mar 19 09:21:22.856433 master-0 kubenswrapper[7385]: I0319 09:21:22.855410 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" event={"ID":"3a07456d-2e8e-4e80-a777-d0903ad21f07","Type":"ContainerStarted","Data":"819b5de997e19e19a9d977e809d0fb3fdd9648622a344dd4ddd33e56129c529f"}
Mar 19 09:21:22.857931 master-0 kubenswrapper[7385]: I0319 09:21:22.857186 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" event={"ID":"676f4062-ea34-48d0-80d7-3cd3d9da341e","Type":"ContainerStarted","Data":"6b36bbd0455724f4c84a788594d831cdec4b648d0e41f4b0f6e9ae8e3b529de5"}
Mar 19 09:21:23.666030 master-0 kubenswrapper[7385]: I0319 09:21:23.665980 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:21:24.317338 master-0 kubenswrapper[7385]: I0319 09:21:24.317286 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 19 09:21:24.317338 master-0 kubenswrapper[7385]: I0319 09:21:24.317328 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 19 09:21:24.337766 master-0 kubenswrapper[7385]: I0319 09:21:24.337700 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 19 09:21:24.860861 master-0 kubenswrapper[7385]: I0319 09:21:24.860809 7385 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:25.539583 master-0 kubenswrapper[7385]: I0319 09:21:25.539451 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:21:25.539941 master-0 kubenswrapper[7385]: I0319 09:21:25.539590 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:25.831399 master-0 kubenswrapper[7385]: I0319 09:21:25.831193 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:21:25.831399 master-0 kubenswrapper[7385]: I0319 09:21:25.831255 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:28.541755 master-0 kubenswrapper[7385]: I0319 09:21:28.541690 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:21:28.542324 master-0 kubenswrapper[7385]: I0319 09:21:28.541767 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:28.830662 master-0 kubenswrapper[7385]: I0319 09:21:28.830526 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:21:28.830662 master-0 kubenswrapper[7385]: I0319 09:21:28.830616 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:28.886237 master-0 kubenswrapper[7385]: I0319 09:21:28.886110 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/1.log"
Mar 19 09:21:28.886972 master-0 kubenswrapper[7385]: I0319 09:21:28.886750 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/0.log"
Mar 19 09:21:28.886972 master-0 kubenswrapper[7385]: I0319 09:21:28.886796 7385 generic.go:334] "Generic (PLEG): container finished" podID="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" containerID="620239dc4a60804d8418bde885755ec6483c00980113b997aa1fddf56697d09e" exitCode=255
Mar 19 09:21:28.886972 master-0 kubenswrapper[7385]: I0319 09:21:28.886826 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" event={"ID":"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff","Type":"ContainerDied","Data":"620239dc4a60804d8418bde885755ec6483c00980113b997aa1fddf56697d09e"}
Mar 19 09:21:28.886972 master-0 kubenswrapper[7385]: I0319 09:21:28.886867 7385 scope.go:117] "RemoveContainer" containerID="edc97cab8d1c4b85265dcfce231bf29161c0caac67a28ad74d915ec1fff0a681"
Mar 19 09:21:28.887359 master-0 kubenswrapper[7385]: I0319 09:21:28.887206 7385 scope.go:117] "RemoveContainer" containerID="620239dc4a60804d8418bde885755ec6483c00980113b997aa1fddf56697d09e"
Mar 19 09:21:28.887422 master-0 kubenswrapper[7385]: E0319 09:21:28.887386 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8c94f4649-6vplt_openshift-controller-manager-operator(16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" podUID="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff"
Mar 19 09:21:29.341812 master-0 kubenswrapper[7385]: I0319 09:21:29.341758 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 19 09:21:29.892942 master-0 kubenswrapper[7385]: I0319 09:21:29.892809 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" event={"ID":"d6cd2eac-6412-4f38-8272-743c67b218a3","Type":"ContainerStarted","Data":"405f9880ce91d786192d330c1e84c542474ebb205faf0f516cd0ea59e7fb46ac"}
Mar 19 09:21:29.894468 master-0 kubenswrapper[7385]: I0319 09:21:29.894436 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" event={"ID":"211d123b-829c-49dd-b119-e172cab607cf","Type":"ContainerStarted","Data":"5b1fa2288fbb6c83fe28f1b0f95b8c94940d825ab5e2fc0f79fedb82fc0c7b9c"}
Mar 19 09:21:29.894708 master-0 kubenswrapper[7385]: I0319 09:21:29.894686 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:21:29.897383 master-0 kubenswrapper[7385]: I0319 09:21:29.897329 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/1.log"
Mar 19 09:21:29.900584 master-0 kubenswrapper[7385]: I0319 09:21:29.900527 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" event={"ID":"3816f149-ddce-41c8-a540-fe866ee71c5e","Type":"ContainerStarted","Data":"a0def10435beba37cc4f2c51d6d95e5b8b0c440dcd92fc57f96ff4a342fc9bce"}
Mar 19 09:21:29.900651 master-0 kubenswrapper[7385]: I0319 09:21:29.900594 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" event={"ID":"3816f149-ddce-41c8-a540-fe866ee71c5e","Type":"ContainerStarted","Data":"878e0d63701a1caf794ebb2ed5a4a759d206a20246066ad1acd5bdfd53aa835e"}
Mar 19 09:21:29.901215 master-0 kubenswrapper[7385]: I0319 09:21:29.901193 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:21:29.902830 master-0 kubenswrapper[7385]: I0319 09:21:29.902608 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" event={"ID":"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc","Type":"ContainerStarted","Data":"8ba7304329f0a0ad38a3e444273ac007e5708e5106ec4cfd0157e01f42d39e4e"}
Mar 19 09:21:29.902887 master-0 kubenswrapper[7385]: I0319 09:21:29.902845 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:21:29.904373 master-0 kubenswrapper[7385]: I0319 09:21:29.904343 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" event={"ID":"58fbf09a-3a26-45ab-8496-11d05c27e9cf","Type":"ContainerStarted","Data":"f7583682489ded760629cc15df0f0f40f6512cf0cba6d9c07d62c71cf5d0483d"}
Mar 19 09:21:29.905238 master-0 kubenswrapper[7385]: I0319 09:21:29.905204 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:21:29.907115 master-0 kubenswrapper[7385]: I0319 09:21:29.907078 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" event={"ID":"3a07456d-2e8e-4e80-a777-d0903ad21f07","Type":"ContainerStarted","Data":"ca3029346e062d2d3d62a9a5d01ee28f260f1a56504c9e25b0b2c54db2ac4665"}
Mar 19 09:21:29.907162 master-0 kubenswrapper[7385]: I0319 09:21:29.907113 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" event={"ID":"3a07456d-2e8e-4e80-a777-d0903ad21f07","Type":"ContainerStarted","Data":"4aeb041310edd04cfbff93e5aeff660e2a5fd04a8635a1408afa36607a005d38"}
Mar 19 09:21:29.908962 master-0 kubenswrapper[7385]: I0319 09:21:29.908928 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lflg7" event={"ID":"bff5aeea-f859-4e38-bf1c-9e730025c212","Type":"ContainerStarted","Data":"2b4dc7754fc6520cb45e45202935522f8f42cb83ddc35918f638769f141253e0"}
Mar 19 09:21:29.909018 master-0 kubenswrapper[7385]: I0319 09:21:29.908961 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lflg7" event={"ID":"bff5aeea-f859-4e38-bf1c-9e730025c212","Type":"ContainerStarted","Data":"2961d010e71cd62b55f3adb68f394352b99c5999a169b868f095887268b4c4e1"}
Mar 19 09:21:29.911063 master-0 kubenswrapper[7385]: I0319 09:21:29.911021 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" event={"ID":"676f4062-ea34-48d0-80d7-3cd3d9da341e","Type":"ContainerStarted","Data":"49917dba45c68841ca8137fddad83ec1b137c19ecfa2c2af03a86c3564c17dc1"}
Mar 19 09:21:29.911902 master-0 kubenswrapper[7385]: I0319 09:21:29.911873 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:21:29.915233 master-0 kubenswrapper[7385]: I0319 09:21:29.915152 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"a17d89cfaf488bcffad3e4931f3e6298b20b65468ca77b1d142866743dc10866"}
Mar 19 09:21:29.915321 master-0 kubenswrapper[7385]: I0319 09:21:29.915240 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"c8c2b59685bc30549de9bfd2d5a139e18ceba9f4c5c0b572b2cf26e45dd85e1b"}
Mar 19 09:21:29.917070 master-0 kubenswrapper[7385]: I0319 09:21:29.917048 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" event={"ID":"e25a16f3-dfe0-49c5-a31d-e310d369f406","Type":"ContainerStarted","Data":"e71afd7932d2180cdb4fb9c5d3d7b2d27526d258db8e8846bd146afd4bee3cc1"}
Mar 19 09:21:29.917463 master-0 kubenswrapper[7385]: I0319 09:21:29.917427 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:21:29.921674 master-0 kubenswrapper[7385]: I0319 09:21:29.921632 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:21:31.540627 master-0 kubenswrapper[7385]: I0319 09:21:31.540516 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:21:31.540627 master-0 kubenswrapper[7385]: I0319 09:21:31.540592 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:31.830887 master-0 kubenswrapper[7385]: I0319 09:21:31.830705 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:21:31.830887 master-0 kubenswrapper[7385]: I0319 09:21:31.830838 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:21:31.831300 master-0 kubenswrapper[7385]: I0319 09:21:31.830935 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:21:31.831971 master-0 kubenswrapper[7385]: I0319 09:21:31.831906 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"33a0eabafa6de07993391ffc1dff5fcd967838ca425e080e0901a5f9624f873e"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 19 09:21:31.832144 master-0 kubenswrapper[7385]: I0319 09:21:31.831994 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" containerID="cri-o://33a0eabafa6de07993391ffc1dff5fcd967838ca425e080e0901a5f9624f873e" gracePeriod=30
Mar 19 09:21:32.309015 master-0 kubenswrapper[7385]: I0319 09:21:32.308936 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": read tcp 10.128.0.2:60828->10.128.0.18:8443: read: connection reset by peer" start-of-body=
Mar 19 09:21:32.309283 master-0 kubenswrapper[7385]: I0319 09:21:32.309043 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": read tcp 10.128.0.2:60828->10.128.0.18:8443: read: connection reset by peer"
Mar 19 09:21:32.937456 master-0 kubenswrapper[7385]: I0319 09:21:32.937390 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-2k7c5_d66c30b6-67ad-4864-8b51-0424d462ac98/openshift-config-operator/1.log"
Mar 19 09:21:32.938529 master-0 kubenswrapper[7385]: I0319 09:21:32.938468 7385 generic.go:334] "Generic (PLEG): container finished" podID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerID="33a0eabafa6de07993391ffc1dff5fcd967838ca425e080e0901a5f9624f873e" exitCode=255
Mar 19 09:21:32.938529 master-0 kubenswrapper[7385]: I0319 09:21:32.938499 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerDied","Data":"33a0eabafa6de07993391ffc1dff5fcd967838ca425e080e0901a5f9624f873e"}
Mar 19 09:21:32.938529 master-0 kubenswrapper[7385]: I0319 09:21:32.938530 7385 scope.go:117] "RemoveContainer" containerID="8c68ece13612c392b8986c6036f0fb5686c420aa3d85d8318f1363a956c12d2e"
Mar 19 09:21:33.004311 master-0 kubenswrapper[7385]: I0319 09:21:33.004268 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:21:33.019373 master-0 kubenswrapper[7385]: I0319 09:21:33.019323 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:21:33.540063 master-0 kubenswrapper[7385]: I0319 09:21:33.539930 7385 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-2k7c5 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused" start-of-body=
Mar 19 09:21:33.540063 master-0 kubenswrapper[7385]: I0319 09:21:33.540006 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" podUID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.18:8443/healthz\": dial tcp 10.128.0.18:8443: connect: connection refused"
Mar 19 09:21:33.722023 master-0 kubenswrapper[7385]: I0319 09:21:33.721970 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 19 09:21:33.831520 master-0 kubenswrapper[7385]: I0319 09:21:33.831352 7385 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-7v7bv container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 19 09:21:33.831520 master-0 kubenswrapper[7385]: I0319 09:21:33.831372 7385 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-7v7bv container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused" start-of-body=
Mar 19 09:21:33.831520 master-0 kubenswrapper[7385]: I0319 09:21:33.831410 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" podUID="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/healthz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 19 09:21:33.831520 master-0 kubenswrapper[7385]: I0319 09:21:33.831408 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" podUID="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.40:8081/readyz\": dial tcp 10.128.0.40:8081: connect: connection refused"
Mar 19 09:21:33.945606 master-0 kubenswrapper[7385]: I0319 09:21:33.945515 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/0.log"
Mar 19 09:21:33.945606 master-0 kubenswrapper[7385]: I0319 09:21:33.945599 7385 generic.go:334] "Generic (PLEG): container finished" podID="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" containerID="02033eb14ea31d2437ce887b5f2e88f1b7e843f260536c63c7e107349723d088" exitCode=1
Mar 19 09:21:33.946452 master-0 kubenswrapper[7385]: I0319 09:21:33.945670 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" event={"ID":"d5d9fbaf-ba14-4d2b-8376-1634eabbc782","Type":"ContainerDied","Data":"02033eb14ea31d2437ce887b5f2e88f1b7e843f260536c63c7e107349723d088"}
Mar 19 09:21:33.946452 master-0 kubenswrapper[7385]: I0319 09:21:33.946181 7385 scope.go:117] "RemoveContainer" containerID="02033eb14ea31d2437ce887b5f2e88f1b7e843f260536c63c7e107349723d088"
Mar 19 09:21:33.956743 master-0 kubenswrapper[7385]: I0319 09:21:33.956067 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-2k7c5_d66c30b6-67ad-4864-8b51-0424d462ac98/openshift-config-operator/1.log"
Mar 19 09:21:33.956743 master-0 kubenswrapper[7385]: I0319 09:21:33.956594 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerStarted","Data":"a42491788debafa4b5caebd582505d3e959b8406cff2a3c8d4b9e3e0ecd564e8"}
Mar 19 09:21:33.974773 master-0 kubenswrapper[7385]: E0319 09:21:33.974723 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 19 09:21:34.005632 master-0 kubenswrapper[7385]: I0319 09:21:34.005527 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.005511767 podStartE2EDuration="1.005511767s" podCreationTimestamp="2026-03-19 09:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:21:33.98666826 +0000 UTC m=+189.661097961" watchObservedRunningTime="2026-03-19 09:21:34.005511767 +0000 UTC m=+189.679941468"
Mar 19 09:21:34.202336 master-0 kubenswrapper[7385]: I0319 09:21:34.202284 7385 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" containerID="cri-o://33a0eabafa6de07993391ffc1dff5fcd967838ca425e080e0901a5f9624f873e"
Mar 19 09:21:34.202336 master-0 kubenswrapper[7385]: I0319 09:21:34.202320 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:21:34.963560 master-0 kubenswrapper[7385]: I0319 09:21:34.963498 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/0.log"
Mar 19 09:21:34.964155 master-0 kubenswrapper[7385]: I0319 09:21:34.963637 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" event={"ID":"d5d9fbaf-ba14-4d2b-8376-1634eabbc782","Type":"ContainerStarted","Data":"167cce93a07388fd74c14d6f7c9fcb3960b363bc259d8edc2e5ed4f902650640"}
Mar 19 09:21:34.964316 master-0 kubenswrapper[7385]: I0319 09:21:34.964281 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:21:34.964403 master-0 kubenswrapper[7385]: I0319 09:21:34.964332 7385 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" containerID="cri-o://02033eb14ea31d2437ce887b5f2e88f1b7e843f260536c63c7e107349723d088"
Mar 19 09:21:34.964403 master-0 kubenswrapper[7385]: I0319 09:21:34.964344 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:21:35.969313 master-0 kubenswrapper[7385]: I0319 09:21:35.969244 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:21:36.542563 master-0 kubenswrapper[7385]: I0319 09:21:36.542500 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:21:37.382433 master-0 kubenswrapper[7385]: I0319 09:21:37.382380 7385 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-rgzxb container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 19 09:21:37.382983 master-0 kubenswrapper[7385]: I0319 09:21:37.382423 7385 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-rgzxb container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused" start-of-body=
Mar 19 09:21:37.382983 master-0 kubenswrapper[7385]: I0319 09:21:37.382483 7385 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" podUID="d58c6b38-ef11-465c-9fee-b83b84ce4669" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/healthz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 19 09:21:37.382983 master-0 kubenswrapper[7385]: I0319 09:21:37.382436 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" podUID="d58c6b38-ef11-465c-9fee-b83b84ce4669" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.39:8081/readyz\": dial tcp 10.128.0.39:8081: connect: connection refused"
Mar 19 09:21:37.982292 master-0 kubenswrapper[7385]: I0319 09:21:37.981327 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/0.log"
Mar 19 09:21:37.982292 master-0 kubenswrapper[7385]: I0319 09:21:37.981634 7385 generic.go:334] "Generic (PLEG): container finished" podID="d58c6b38-ef11-465c-9fee-b83b84ce4669" containerID="742f2b9c536e8374c80963c76d1696cff2ac061aef9be3d98e75e3dbbdd21557" exitCode=1
Mar 19 09:21:37.982292 master-0 kubenswrapper[7385]: I0319 09:21:37.981667 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" event={"ID":"d58c6b38-ef11-465c-9fee-b83b84ce4669","Type":"ContainerDied","Data":"742f2b9c536e8374c80963c76d1696cff2ac061aef9be3d98e75e3dbbdd21557"}
Mar 19 09:21:37.982292 master-0 kubenswrapper[7385]: I0319 09:21:37.982072 7385 scope.go:117] "RemoveContainer" containerID="742f2b9c536e8374c80963c76d1696cff2ac061aef9be3d98e75e3dbbdd21557"
Mar 19 09:21:38.988996 master-0 kubenswrapper[7385]: I0319 09:21:38.988929 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/0.log"
Mar 19 09:21:38.989526 master-0 kubenswrapper[7385]: I0319 09:21:38.989393 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" event={"ID":"d58c6b38-ef11-465c-9fee-b83b84ce4669","Type":"ContainerStarted","Data":"dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0"}
Mar 19 09:21:38.989767 master-0 kubenswrapper[7385]: I0319 09:21:38.989727 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:21:38.991090 master-0 kubenswrapper[7385]: I0319 09:21:38.991047 7385 generic.go:334] "Generic (PLEG): container finished" podID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerID="0efec46299b67a0eea4b13ca67058dc6945af55d88748d9fe42464dc879df463" exitCode=0
Mar 19 09:21:38.991173 master-0 kubenswrapper[7385]: I0319 09:21:38.991094 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" event={"ID":"c1f4f7b3-7f79-4618-b87a-400cadcb9813","Type":"ContainerDied","Data":"0efec46299b67a0eea4b13ca67058dc6945af55d88748d9fe42464dc879df463"}
Mar 19 09:21:38.991561 master-0 kubenswrapper[7385]: I0319 09:21:38.991525 7385 scope.go:117] "RemoveContainer" containerID="0efec46299b67a0eea4b13ca67058dc6945af55d88748d9fe42464dc879df463"
Mar 19 09:21:39.540294 master-0 kubenswrapper[7385]: I0319 09:21:39.540240 7385 scope.go:117] "RemoveContainer" containerID="620239dc4a60804d8418bde885755ec6483c00980113b997aa1fddf56697d09e"
Mar 19 09:21:39.996519 master-0 kubenswrapper[7385]: I0319 09:21:39.996475 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" event={"ID":"c1f4f7b3-7f79-4618-b87a-400cadcb9813","Type":"ContainerStarted","Data":"347a1d3bec5889e9ec93363cf938da9436a428160ad0dc8a308e691fb255063e"}
Mar 19 09:21:39.997027 master-0 kubenswrapper[7385]: I0319 09:21:39.996852 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:21:39.998706 master-0 kubenswrapper[7385]: I0319 09:21:39.998671 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/1.log"
Mar 19 09:21:39.998831 master-0 kubenswrapper[7385]: I0319 09:21:39.998802 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" event={"ID":"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff","Type":"ContainerStarted","Data":"2751edab0e5b28c2276ba8098e2b1a5d1e939a202ad40429cef225d655447417"}
Mar 19 09:21:40.010465 master-0 kubenswrapper[7385]: I0319 09:21:40.010424 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x"
Mar 19 09:21:43.833714 master-0 kubenswrapper[7385]: I0319 09:21:43.833658 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:21:47.383165 master-0 kubenswrapper[7385]: I0319 09:21:47.383121 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:21:49.616663 master-0 kubenswrapper[7385]: I0319 09:21:49.616599 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l26xf"]
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: E0319 09:21:49.616776 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9969717-8350-416e-8711-877cdf557d81" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616788 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9969717-8350-416e-8711-877cdf557d81" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: E0319 09:21:49.616805 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec98e408-a574-40eb-b84d-111edbaab81a" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616811 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec98e408-a574-40eb-b84d-111edbaab81a" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: E0319 09:21:49.616816 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616822 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: E0319 09:21:49.616832 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ab0802-da8a-475c-a707-09f7838f580b" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616837 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ab0802-da8a-475c-a707-09f7838f580b" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616914 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616923 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec98e408-a574-40eb-b84d-111edbaab81a" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616932 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ab0802-da8a-475c-a707-09f7838f580b" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.616940 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9969717-8350-416e-8711-877cdf557d81" containerName="installer"
Mar 19 09:21:49.617635 master-0 kubenswrapper[7385]: I0319 09:21:49.617517 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:21:49.619785 master-0 kubenswrapper[7385]: I0319 09:21:49.619741 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-g7f7m"
Mar 19 09:21:49.622156 master-0 kubenswrapper[7385]: I0319 09:21:49.622089 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brpbp"]
Mar 19 09:21:49.623631 master-0 kubenswrapper[7385]: I0319 09:21:49.623491 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brpbp"
Mar 19 09:21:49.625529 master-0 kubenswrapper[7385]: I0319 09:21:49.625484 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x7brr"
Mar 19 09:21:49.647139 master-0 kubenswrapper[7385]: I0319 09:21:49.646467 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-887wl"]
Mar 19 09:21:49.653121 master-0 kubenswrapper[7385]: I0319 09:21:49.652728 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-utilities\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:21:49.653121 master-0 kubenswrapper[7385]: I0319 09:21:49.652813 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwbr\" (UniqueName: \"kubernetes.io/projected/d504cbc7-5c09-4712-9f7a-c41a6386ef79-kube-api-access-tmwbr\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:21:49.653121 master-0 kubenswrapper[7385]: I0319 09:21:49.652846 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-utilities\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp"
Mar 19 09:21:49.653121 master-0 kubenswrapper[7385]: I0319 09:21:49.652881 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-catalog-content\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:21:49.653121 master-0 kubenswrapper[7385]: I0319 09:21:49.652910 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-catalog-content\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp"
Mar 19 09:21:49.653121 master-0 kubenswrapper[7385]: I0319 09:21:49.652942 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mncvz\" (UniqueName: \"kubernetes.io/projected/e8a7e077-3f6c-4efb-9865-cf82480c5da1-kube-api-access-mncvz\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp"
Mar 19 09:21:49.653830 master-0 kubenswrapper[7385]: I0319 09:21:49.653403 7385 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.655448 master-0 kubenswrapper[7385]: I0319 09:21:49.655363 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2ncgt" Mar 19 09:21:49.655597 master-0 kubenswrapper[7385]: I0319 09:21:49.655495 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7cczg"] Mar 19 09:21:49.658356 master-0 kubenswrapper[7385]: I0319 09:21:49.656403 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l26xf"] Mar 19 09:21:49.658356 master-0 kubenswrapper[7385]: I0319 09:21:49.656512 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.658356 master-0 kubenswrapper[7385]: I0319 09:21:49.656924 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brpbp"] Mar 19 09:21:49.658663 master-0 kubenswrapper[7385]: I0319 09:21:49.658377 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-nf86g" Mar 19 09:21:49.659740 master-0 kubenswrapper[7385]: I0319 09:21:49.659694 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7cczg"] Mar 19 09:21:49.713876 master-0 kubenswrapper[7385]: I0319 09:21:49.713786 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-887wl"] Mar 19 09:21:49.754401 master-0 kubenswrapper[7385]: I0319 09:21:49.754351 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-utilities\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " 
pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.754401 master-0 kubenswrapper[7385]: I0319 09:21:49.754402 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-utilities\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754449 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwbr\" (UniqueName: \"kubernetes.io/projected/d504cbc7-5c09-4712-9f7a-c41a6386ef79-kube-api-access-tmwbr\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754473 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-utilities\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754509 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-catalog-content\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754536 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-catalog-content\") pod \"redhat-marketplace-brpbp\" 
(UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754584 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mncvz\" (UniqueName: \"kubernetes.io/projected/e8a7e077-3f6c-4efb-9865-cf82480c5da1-kube-api-access-mncvz\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754612 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8p7b\" (UniqueName: \"kubernetes.io/projected/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-kube-api-access-g8p7b\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754643 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-catalog-content\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754673 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-utilities\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754700 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-catalog-content\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.754725 master-0 kubenswrapper[7385]: I0319 09:21:49.754727 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxn9l\" (UniqueName: \"kubernetes.io/projected/72756f50-c970-4ef6-b8ca-88e49f996a74-kube-api-access-zxn9l\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.755672 master-0 kubenswrapper[7385]: I0319 09:21:49.755639 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-utilities\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:21:49.755672 master-0 kubenswrapper[7385]: I0319 09:21:49.755659 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-utilities\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:21:49.755943 master-0 kubenswrapper[7385]: I0319 09:21:49.755900 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-catalog-content\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:21:49.756041 master-0 kubenswrapper[7385]: I0319 09:21:49.756018 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-catalog-content\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:21:49.770955 master-0 kubenswrapper[7385]: I0319 09:21:49.770876 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwbr\" (UniqueName: \"kubernetes.io/projected/d504cbc7-5c09-4712-9f7a-c41a6386ef79-kube-api-access-tmwbr\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:21:49.772217 master-0 kubenswrapper[7385]: I0319 09:21:49.772174 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mncvz\" (UniqueName: \"kubernetes.io/projected/e8a7e077-3f6c-4efb-9865-cf82480c5da1-kube-api-access-mncvz\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:21:49.855611 master-0 kubenswrapper[7385]: I0319 09:21:49.855514 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8p7b\" (UniqueName: \"kubernetes.io/projected/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-kube-api-access-g8p7b\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.855611 master-0 kubenswrapper[7385]: I0319 09:21:49.855600 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-catalog-content\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.855908 master-0 kubenswrapper[7385]: I0319 09:21:49.855798 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-catalog-content\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.856018 master-0 kubenswrapper[7385]: I0319 09:21:49.855988 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxn9l\" (UniqueName: \"kubernetes.io/projected/72756f50-c970-4ef6-b8ca-88e49f996a74-kube-api-access-zxn9l\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.856055 master-0 kubenswrapper[7385]: I0319 09:21:49.856034 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-utilities\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.856125 master-0 kubenswrapper[7385]: I0319 09:21:49.856057 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-utilities\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.856334 master-0 kubenswrapper[7385]: I0319 09:21:49.856210 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-catalog-content\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.856525 master-0 kubenswrapper[7385]: I0319 09:21:49.856497 7385 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-catalog-content\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.856827 master-0 kubenswrapper[7385]: I0319 09:21:49.856790 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-utilities\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.856865 master-0 kubenswrapper[7385]: I0319 09:21:49.856793 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-utilities\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.872028 master-0 kubenswrapper[7385]: I0319 09:21:49.871838 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxn9l\" (UniqueName: \"kubernetes.io/projected/72756f50-c970-4ef6-b8ca-88e49f996a74-kube-api-access-zxn9l\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:49.875085 master-0 kubenswrapper[7385]: I0319 09:21:49.875036 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8p7b\" (UniqueName: \"kubernetes.io/projected/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-kube-api-access-g8p7b\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:49.993756 master-0 kubenswrapper[7385]: I0319 09:21:49.993657 7385 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:21:50.007911 master-0 kubenswrapper[7385]: I0319 09:21:50.007841 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:21:50.027603 master-0 kubenswrapper[7385]: I0319 09:21:50.027387 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:21:50.046599 master-0 kubenswrapper[7385]: I0319 09:21:50.045952 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:21:50.396412 master-0 kubenswrapper[7385]: I0319 09:21:50.396377 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l26xf"] Mar 19 09:21:50.401184 master-0 kubenswrapper[7385]: W0319 09:21:50.398929 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd504cbc7_5c09_4712_9f7a_c41a6386ef79.slice/crio-da2e551f19738e875d8b4b505223588d9ea94eb7716af7e0ff449212c8514bb4 WatchSource:0}: Error finding container da2e551f19738e875d8b4b505223588d9ea94eb7716af7e0ff449212c8514bb4: Status 404 returned error can't find the container with id da2e551f19738e875d8b4b505223588d9ea94eb7716af7e0ff449212c8514bb4 Mar 19 09:21:50.462007 master-0 kubenswrapper[7385]: I0319 09:21:50.461942 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brpbp"] Mar 19 09:21:50.475011 master-0 kubenswrapper[7385]: W0319 09:21:50.474975 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8a7e077_3f6c_4efb_9865_cf82480c5da1.slice/crio-fc42f33929f2a6b9103f7b23ae3ef7d3e614662550ded98a184c1328a4069b14 WatchSource:0}: Error 
finding container fc42f33929f2a6b9103f7b23ae3ef7d3e614662550ded98a184c1328a4069b14: Status 404 returned error can't find the container with id fc42f33929f2a6b9103f7b23ae3ef7d3e614662550ded98a184c1328a4069b14 Mar 19 09:21:50.511779 master-0 kubenswrapper[7385]: I0319 09:21:50.510522 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-887wl"] Mar 19 09:21:50.512797 master-0 kubenswrapper[7385]: I0319 09:21:50.512765 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7cczg"] Mar 19 09:21:50.521079 master-0 kubenswrapper[7385]: W0319 09:21:50.521034 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72756f50_c970_4ef6_b8ca_88e49f996a74.slice/crio-3014cb772787d6c5ed5213751efdfc2f600b71700a9642b8657868066aed7a56 WatchSource:0}: Error finding container 3014cb772787d6c5ed5213751efdfc2f600b71700a9642b8657868066aed7a56: Status 404 returned error can't find the container with id 3014cb772787d6c5ed5213751efdfc2f600b71700a9642b8657868066aed7a56 Mar 19 09:21:51.049833 master-0 kubenswrapper[7385]: I0319 09:21:51.049710 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-887wl" event={"ID":"72756f50-c970-4ef6-b8ca-88e49f996a74","Type":"ContainerDied","Data":"2968fa6613c3b81628d2874b0078774e9f4bca1ed372f6d47f80eb1be0cf6041"} Mar 19 09:21:51.049833 master-0 kubenswrapper[7385]: I0319 09:21:51.049617 7385 generic.go:334] "Generic (PLEG): container finished" podID="72756f50-c970-4ef6-b8ca-88e49f996a74" containerID="2968fa6613c3b81628d2874b0078774e9f4bca1ed372f6d47f80eb1be0cf6041" exitCode=0 Mar 19 09:21:51.050476 master-0 kubenswrapper[7385]: I0319 09:21:51.049933 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-887wl" 
event={"ID":"72756f50-c970-4ef6-b8ca-88e49f996a74","Type":"ContainerStarted","Data":"3014cb772787d6c5ed5213751efdfc2f600b71700a9642b8657868066aed7a56"} Mar 19 09:21:51.053793 master-0 kubenswrapper[7385]: I0319 09:21:51.053759 7385 generic.go:334] "Generic (PLEG): container finished" podID="e8a7e077-3f6c-4efb-9865-cf82480c5da1" containerID="84f6a11d7eb8cf18422a0a99e6bf0998baa0e9649ec5853b155cb1c537b44211" exitCode=0 Mar 19 09:21:51.053906 master-0 kubenswrapper[7385]: I0319 09:21:51.053824 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brpbp" event={"ID":"e8a7e077-3f6c-4efb-9865-cf82480c5da1","Type":"ContainerDied","Data":"84f6a11d7eb8cf18422a0a99e6bf0998baa0e9649ec5853b155cb1c537b44211"} Mar 19 09:21:51.053906 master-0 kubenswrapper[7385]: I0319 09:21:51.053848 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brpbp" event={"ID":"e8a7e077-3f6c-4efb-9865-cf82480c5da1","Type":"ContainerStarted","Data":"fc42f33929f2a6b9103f7b23ae3ef7d3e614662550ded98a184c1328a4069b14"} Mar 19 09:21:51.057909 master-0 kubenswrapper[7385]: I0319 09:21:51.057837 7385 generic.go:334] "Generic (PLEG): container finished" podID="d504cbc7-5c09-4712-9f7a-c41a6386ef79" containerID="213d80691bc22d27ded0500e6da740b91a8a30071ba53b718db2c03bf881bbb2" exitCode=0 Mar 19 09:21:51.058039 master-0 kubenswrapper[7385]: I0319 09:21:51.057927 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l26xf" event={"ID":"d504cbc7-5c09-4712-9f7a-c41a6386ef79","Type":"ContainerDied","Data":"213d80691bc22d27ded0500e6da740b91a8a30071ba53b718db2c03bf881bbb2"} Mar 19 09:21:51.058039 master-0 kubenswrapper[7385]: I0319 09:21:51.057967 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l26xf" 
event={"ID":"d504cbc7-5c09-4712-9f7a-c41a6386ef79","Type":"ContainerStarted","Data":"da2e551f19738e875d8b4b505223588d9ea94eb7716af7e0ff449212c8514bb4"} Mar 19 09:21:51.064750 master-0 kubenswrapper[7385]: I0319 09:21:51.064709 7385 generic.go:334] "Generic (PLEG): container finished" podID="5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5" containerID="be8bce13d740e2f9e98bf0d2d8675ba153adc7ecfb63753dde92f39709976021" exitCode=0 Mar 19 09:21:51.064848 master-0 kubenswrapper[7385]: I0319 09:21:51.064754 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cczg" event={"ID":"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5","Type":"ContainerDied","Data":"be8bce13d740e2f9e98bf0d2d8675ba153adc7ecfb63753dde92f39709976021"} Mar 19 09:21:51.064848 master-0 kubenswrapper[7385]: I0319 09:21:51.064792 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cczg" event={"ID":"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5","Type":"ContainerStarted","Data":"16ae0be12cb0948b576a88de76c552bf6bb4908608f91f6bc384118d39093798"} Mar 19 09:21:51.576137 master-0 kubenswrapper[7385]: I0319 09:21:51.576080 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rw7tg"] Mar 19 09:21:51.577054 master-0 kubenswrapper[7385]: I0319 09:21:51.577024 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.582116 master-0 kubenswrapper[7385]: I0319 09:21:51.582072 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 09:21:51.592044 master-0 kubenswrapper[7385]: I0319 09:21:51.590855 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xmjpx" Mar 19 09:21:51.678655 master-0 kubenswrapper[7385]: I0319 09:21:51.677388 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.678655 master-0 kubenswrapper[7385]: I0319 09:21:51.677590 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.678655 master-0 kubenswrapper[7385]: I0319 09:21:51.677627 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp5rd\" (UniqueName: \"kubernetes.io/projected/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-kube-api-access-rp5rd\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.678655 master-0 kubenswrapper[7385]: I0319 09:21:51.677688 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-rootfs\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.714996 master-0 kubenswrapper[7385]: I0319 09:21:51.713444 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m"] Mar 19 09:21:51.714996 master-0 kubenswrapper[7385]: I0319 09:21:51.714298 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.718079 master-0 kubenswrapper[7385]: I0319 09:21:51.717821 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vkwb4" Mar 19 09:21:51.718079 master-0 kubenswrapper[7385]: I0319 09:21:51.718023 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 09:21:51.718251 master-0 kubenswrapper[7385]: I0319 09:21:51.718146 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 09:21:51.718338 master-0 kubenswrapper[7385]: I0319 09:21:51.718307 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 09:21:51.718424 master-0 kubenswrapper[7385]: I0319 09:21:51.718406 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 09:21:51.719817 master-0 kubenswrapper[7385]: I0319 09:21:51.718534 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 09:21:51.719817 master-0 kubenswrapper[7385]: I0319 
09:21:51.719617 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"] Mar 19 09:21:51.721477 master-0 kubenswrapper[7385]: I0319 09:21:51.721294 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.734310 master-0 kubenswrapper[7385]: I0319 09:21:51.722874 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj"] Mar 19 09:21:51.734310 master-0 kubenswrapper[7385]: I0319 09:21:51.723957 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:21:51.738833 master-0 kubenswrapper[7385]: I0319 09:21:51.738790 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 19 09:21:51.739002 master-0 kubenswrapper[7385]: I0319 09:21:51.738981 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 09:21:51.739221 master-0 kubenswrapper[7385]: I0319 09:21:51.739196 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 09:21:51.739327 master-0 kubenswrapper[7385]: I0319 09:21:51.739306 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 19 09:21:51.739463 master-0 kubenswrapper[7385]: I0319 09:21:51.739442 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 19 09:21:51.739573 master-0 kubenswrapper[7385]: I0319 09:21:51.739557 7385 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 19 09:21:51.739683 master-0 kubenswrapper[7385]: I0319 09:21:51.739669 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-nlbmt" Mar 19 09:21:51.739767 master-0 kubenswrapper[7385]: I0319 09:21:51.739754 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 09:21:51.739857 master-0 kubenswrapper[7385]: I0319 09:21:51.739845 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-xd9dt" Mar 19 09:21:51.746861 master-0 kubenswrapper[7385]: I0319 09:21:51.745822 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj"] Mar 19 09:21:51.753505 master-0 kubenswrapper[7385]: I0319 09:21:51.753053 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"] Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778623 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778678 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778697 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp5rd\" (UniqueName: \"kubernetes.io/projected/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-kube-api-access-rp5rd\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778715 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfdkb\" (UniqueName: \"kubernetes.io/projected/14438c84-72d3-4f45-88a4-fc7e80df5fb8-kube-api-access-dfdkb\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778736 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-rootfs\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778757 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-config\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778777 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778802 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nql4h\" (UniqueName: \"kubernetes.io/projected/6ed4ce2b-080f-4523-8527-eee768e06123-kube-api-access-nql4h\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778819 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778835 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778864 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4fk\" (UniqueName: 
\"kubernetes.io/projected/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-kube-api-access-lr4fk\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778879 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.778895 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.780415 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-rootfs\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.781002 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.802565 master-0 kubenswrapper[7385]: I0319 09:21:51.781831 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.826108 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp5rd\" (UniqueName: \"kubernetes.io/projected/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-kube-api-access-rp5rd\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.834501 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"] Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.835516 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.835574 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"] Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.836165 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.839694 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"] Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.840272 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.843638 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.846981 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"] Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.847877 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-nst6c" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.847906 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 19 09:21:51.848624 master-0 kubenswrapper[7385]: I0319 09:21:51.848096 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4qjxp" Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.875095 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.875291 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" 
Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.875500 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.875657 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.875879 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.875979 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.876102 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 09:21:51.884564 master-0 kubenswrapper[7385]: I0319 09:21:51.876197 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-sjg6x" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884609 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4fk\" (UniqueName: \"kubernetes.io/projected/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-kube-api-access-lr4fk\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884644 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls\") pod 
\"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884667 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884693 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884716 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m55tp\" (UniqueName: \"kubernetes.io/projected/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-kube-api-access-m55tp\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884735 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884753 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884772 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9vh\" (UniqueName: \"kubernetes.io/projected/cef53432-93f5-4581-b3de-c8cc5cac2ecb-kube-api-access-sm9vh\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884807 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfdkb\" (UniqueName: \"kubernetes.io/projected/14438c84-72d3-4f45-88a4-fc7e80df5fb8-kube-api-access-dfdkb\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884830 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: 
\"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884855 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-config\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884878 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2vbp\" (UniqueName: \"kubernetes.io/projected/cd1425b9-fcd1-4aba-899f-e110eebce626-kube-api-access-s2vbp\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884911 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 09:21:51.884929 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.884941 master-0 kubenswrapper[7385]: I0319 
09:21:51.884949 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.885454 master-0 kubenswrapper[7385]: I0319 09:21:51.884980 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" Mar 19 09:21:51.885454 master-0 kubenswrapper[7385]: I0319 09:21:51.885059 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nql4h\" (UniqueName: \"kubernetes.io/projected/6ed4ce2b-080f-4523-8527-eee768e06123-kube-api-access-nql4h\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:21:51.895562 master-0 kubenswrapper[7385]: I0319 09:21:51.886012 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.895562 master-0 kubenswrapper[7385]: I0319 09:21:51.886128 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.895562 master-0 kubenswrapper[7385]: I0319 09:21:51.886186 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.895562 master-0 kubenswrapper[7385]: I0319 09:21:51.886357 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.895562 master-0 kubenswrapper[7385]: I0319 09:21:51.886403 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"] Mar 19 09:21:51.895562 master-0 kubenswrapper[7385]: I0319 09:21:51.886701 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-config\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.895562 master-0 kubenswrapper[7385]: I0319 09:21:51.886846 7385 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.908564 master-0 kubenswrapper[7385]: I0319 09:21:51.895955 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:21:51.908564 master-0 kubenswrapper[7385]: I0319 09:21:51.896317 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"] Mar 19 09:21:51.908564 master-0 kubenswrapper[7385]: I0319 09:21:51.896425 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:21:51.908564 master-0 kubenswrapper[7385]: I0319 09:21:51.898069 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:21:51.908564 master-0 kubenswrapper[7385]: I0319 09:21:51.901591 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.908564 master-0 kubenswrapper[7385]: I0319 09:21:51.908422 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p2d2f" Mar 19 09:21:51.917561 master-0 kubenswrapper[7385]: I0319 09:21:51.909756 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 19 09:21:51.917561 master-0 kubenswrapper[7385]: I0319 09:21:51.909976 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 19 09:21:51.917561 master-0 kubenswrapper[7385]: I0319 09:21:51.915411 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"] Mar 19 09:21:51.969628 master-0 kubenswrapper[7385]: I0319 09:21:51.966502 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:51.975098 master-0 kubenswrapper[7385]: I0319 09:21:51.971721 7385 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"] Mar 19 09:21:51.975098 master-0 kubenswrapper[7385]: I0319 09:21:51.972803 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nql4h\" (UniqueName: \"kubernetes.io/projected/6ed4ce2b-080f-4523-8527-eee768e06123-kube-api-access-nql4h\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:21:51.975098 master-0 kubenswrapper[7385]: I0319 09:21:51.973155 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:21:51.978337 master-0 kubenswrapper[7385]: I0319 09:21:51.978245 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 09:21:51.978537 master-0 kubenswrapper[7385]: I0319 09:21:51.978455 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-vxsrn" Mar 19 09:21:51.980910 master-0 kubenswrapper[7385]: I0319 09:21:51.980851 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfdkb\" (UniqueName: \"kubernetes.io/projected/14438c84-72d3-4f45-88a4-fc7e80df5fb8-kube-api-access-dfdkb\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:21:51.988462 master-0 kubenswrapper[7385]: I0319 09:21:51.988414 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55440bf9-0881-4823-af64-5652c2ad89ff-tmpfs\") pod \"packageserver-57475586f6-pnw8k\" (UID: 
\"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:21:51.988680 master-0 kubenswrapper[7385]: I0319 09:21:51.988489 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-apiservice-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:21:51.988680 master-0 kubenswrapper[7385]: I0319 09:21:51.988527 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2vbp\" (UniqueName: \"kubernetes.io/projected/cd1425b9-fcd1-4aba-899f-e110eebce626-kube-api-access-s2vbp\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.988680 master-0 kubenswrapper[7385]: I0319 09:21:51.988588 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9tpx\" (UniqueName: \"kubernetes.io/projected/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-kube-api-access-s9tpx\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:21:51.988680 master-0 kubenswrapper[7385]: I0319 09:21:51.988614 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.988680 master-0 kubenswrapper[7385]: I0319 09:21:51.988656 7385 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:21:51.988818 master-0 kubenswrapper[7385]: I0319 09:21:51.988684 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:21:51.988818 master-0 kubenswrapper[7385]: I0319 09:21:51.988743 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" Mar 19 09:21:51.988818 master-0 kubenswrapper[7385]: I0319 09:21:51.988782 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.996680 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-webhook-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.996735 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.996784 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjps\" (UniqueName: \"kubernetes.io/projected/55440bf9-0881-4823-af64-5652c2ad89ff-kube-api-access-gtjps\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.996841 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.996890 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m55tp\" (UniqueName: \"kubernetes.io/projected/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-kube-api-access-m55tp\") pod 
\"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.996923 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.996956 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9vh\" (UniqueName: \"kubernetes.io/projected/cef53432-93f5-4581-b3de-c8cc5cac2ecb-kube-api-access-sm9vh\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"
Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.997003 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:51.997196 master-0 kubenswrapper[7385]: I0319 09:21:51.997054 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls\") pod
\"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:21:51.997623 master-0 kubenswrapper[7385]: I0319 09:21:51.997237 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj"
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:51.990398 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:51.990997 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:51.996235 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"]
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:51.999306 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:51.991577 7385
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:52.000202 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4fk\" (UniqueName: \"kubernetes.io/projected/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-kube-api-access-lr4fk\") pod \"machine-approver-6cb57bb5db-9ll5m\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m"
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:52.001023 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:21:52.007900 master-0 kubenswrapper[7385]: I0319 09:21:52.001330 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:52.017452 master-0 kubenswrapper[7385]: I0319 09:21:52.013273 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName:
\"kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"
Mar 19 09:21:52.017452 master-0 kubenswrapper[7385]: I0319 09:21:52.016155 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:52.020415 master-0 kubenswrapper[7385]: I0319 09:21:52.020000 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2vbp\" (UniqueName: \"kubernetes.io/projected/cd1425b9-fcd1-4aba-899f-e110eebce626-kube-api-access-s2vbp\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:21:52.021942 master-0 kubenswrapper[7385]: I0319 09:21:52.021463 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m55tp\" (UniqueName: \"kubernetes.io/projected/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-kube-api-access-m55tp\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-vnkml\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:52.035217 master-0 kubenswrapper[7385]: I0319 09:21:52.033371 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9vh\" (UniqueName:
\"kubernetes.io/projected/cef53432-93f5-4581-b3de-c8cc5cac2ecb-kube-api-access-sm9vh\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"
Mar 19 09:21:52.035217 master-0 kubenswrapper[7385]: I0319 09:21:52.035019 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"
Mar 19 09:21:52.053527 master-0 kubenswrapper[7385]: W0319 09:21:52.050728 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea94b52_7d8f_4b88_97c7_ff1a774a5f8e.slice/crio-f7c28b40cde4a7aad725d4c7e6669cdabc0febc1e8bf8d8daea1b94e0e12e828 WatchSource:0}: Error finding container f7c28b40cde4a7aad725d4c7e6669cdabc0febc1e8bf8d8daea1b94e0e12e828: Status 404 returned error can't find the container with id f7c28b40cde4a7aad725d4c7e6669cdabc0febc1e8bf8d8daea1b94e0e12e828
Mar 19 09:21:52.053527 master-0 kubenswrapper[7385]: I0319 09:21:52.051189 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:21:52.076082 master-0 kubenswrapper[7385]: I0319 09:21:52.075555 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m"
Mar 19 09:21:52.083048 master-0 kubenswrapper[7385]: I0319 09:21:52.082978 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" event={"ID":"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e","Type":"ContainerStarted","Data":"f7c28b40cde4a7aad725d4c7e6669cdabc0febc1e8bf8d8daea1b94e0e12e828"}
Mar 19 09:21:52.096746 master-0 kubenswrapper[7385]: I0319 09:21:52.096599 7385 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"
Mar 19 09:21:52.099432 master-0 kubenswrapper[7385]: I0319 09:21:52.099353 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtjps\" (UniqueName: \"kubernetes.io/projected/55440bf9-0881-4823-af64-5652c2ad89ff-kube-api-access-gtjps\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.099574 master-0 kubenswrapper[7385]: I0319 09:21:52.099504 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:52.099638 master-0 kubenswrapper[7385]: I0319 09:21:52.099583 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55440bf9-0881-4823-af64-5652c2ad89ff-tmpfs\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.099675 master-0 kubenswrapper[7385]: I0319 09:21:52.099640 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-apiservice-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.099703 master-0 kubenswrapper[7385]: I0319 09:21:52.099687 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"kube-api-access-s9tpx\" (UniqueName: \"kubernetes.io/projected/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-kube-api-access-s9tpx\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:52.099885 master-0 kubenswrapper[7385]: I0319 09:21:52.099831 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:52.099885 master-0 kubenswrapper[7385]: I0319 09:21:52.099884 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-webhook-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.103607 master-0 kubenswrapper[7385]: I0319 09:21:52.103582 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:52.107855 master-0 kubenswrapper[7385]: I0319 09:21:52.107708 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") "
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:52.108679 master-0 kubenswrapper[7385]: I0319 09:21:52.108638 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-apiservice-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.109582 master-0 kubenswrapper[7385]: I0319 09:21:52.109488 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-webhook-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.117536 master-0 kubenswrapper[7385]: I0319 09:21:52.117203 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55440bf9-0881-4823-af64-5652c2ad89ff-tmpfs\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.120746 master-0 kubenswrapper[7385]: I0319 09:21:52.120691 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtjps\" (UniqueName: \"kubernetes.io/projected/55440bf9-0881-4823-af64-5652c2ad89ff-kube-api-access-gtjps\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.121832 master-0 kubenswrapper[7385]: I0319 09:21:52.121727 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tpx\" (UniqueName:
\"kubernetes.io/projected/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-kube-api-access-s9tpx\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:52.321196 master-0 kubenswrapper[7385]: I0319 09:21:52.321154 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:21:52.369569 master-0 kubenswrapper[7385]: I0319 09:21:52.369533 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:21:52.389459 master-0 kubenswrapper[7385]: I0319 09:21:52.389424 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:52.600605 master-0 kubenswrapper[7385]: I0319 09:21:52.600266 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj"]
Mar 19 09:21:52.612774 master-0 kubenswrapper[7385]: I0319 09:21:52.612628 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"]
Mar 19 09:21:52.701440 master-0 kubenswrapper[7385]: I0319 09:21:52.701364 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"]
Mar 19 09:21:52.717802 master-0 kubenswrapper[7385]: I0319 09:21:52.717746 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"]
Mar 19 09:21:52.856974 master-0 kubenswrapper[7385]: I0319 09:21:52.856847 7385 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m"]
Mar 19 09:21:52.863857 master-0 kubenswrapper[7385]: I0319 09:21:52.863751 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"]
Mar 19 09:21:52.897908 master-0 kubenswrapper[7385]: I0319 09:21:52.897709 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"]
Mar 19 09:21:52.918092 master-0 kubenswrapper[7385]: W0319 09:21:52.918049 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55440bf9_0881_4823_af64_5652c2ad89ff.slice/crio-c2c1fb4aec553af65176f49e937958c69c931605beee69d28364ee9ba795514f WatchSource:0}: Error finding container c2c1fb4aec553af65176f49e937958c69c931605beee69d28364ee9ba795514f: Status 404 returned error can't find the container with id c2c1fb4aec553af65176f49e937958c69c931605beee69d28364ee9ba795514f
Mar 19 09:21:53.095853 master-0 kubenswrapper[7385]: I0319 09:21:53.095795 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" event={"ID":"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094","Type":"ContainerStarted","Data":"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913"}
Mar 19 09:21:53.095853 master-0 kubenswrapper[7385]: I0319 09:21:53.095834 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" event={"ID":"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094","Type":"ContainerStarted","Data":"868ba7d1d5ae31248f8559956f57aa9fe3e7cf2a5de8ce062524bdbe8b0ff198"}
Mar 19 09:21:53.111230 master-0 kubenswrapper[7385]: I0319 09:21:53.111184 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw7tg"
event={"ID":"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e","Type":"ContainerStarted","Data":"400bb15e49e07af89910e200fd27d1a0059b8275a1463e6f888faf36c054efb3"}
Mar 19 09:21:53.111230 master-0 kubenswrapper[7385]: I0319 09:21:53.111230 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" event={"ID":"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e","Type":"ContainerStarted","Data":"a94538e58e7c1c167ded8e990b8f0cad583b0a9074ec233435728295030c168d"}
Mar 19 09:21:53.117211 master-0 kubenswrapper[7385]: I0319 09:21:53.117116 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" event={"ID":"cd1425b9-fcd1-4aba-899f-e110eebce626","Type":"ContainerStarted","Data":"1e73c045cdc4f18fc9b27c1c708f29187d7ef3af44b04953bcb3e9295fe18d4c"}
Mar 19 09:21:53.117211 master-0 kubenswrapper[7385]: I0319 09:21:53.117157 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" event={"ID":"cd1425b9-fcd1-4aba-899f-e110eebce626","Type":"ContainerStarted","Data":"3a0665e823da7bfc0df78c1979cfd4c3ca72731bad4e79e2c131fc1c4139e66f"}
Mar 19 09:21:53.119450 master-0 kubenswrapper[7385]: I0319 09:21:53.119423 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" event={"ID":"14438c84-72d3-4f45-88a4-fc7e80df5fb8","Type":"ContainerStarted","Data":"ef04aa85db442add04f4bc95078f7250f88a8b171f896036086838d5c3039b86"}
Mar 19 09:21:53.119450 master-0 kubenswrapper[7385]: I0319 09:21:53.119450 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" event={"ID":"14438c84-72d3-4f45-88a4-fc7e80df5fb8","Type":"ContainerStarted","Data":"ba9f914f103017d6ef2cf2c16d508f5302ad218dbd57c88fe26f6d74473e9036"}
Mar 19 09:21:53.128426 master-0 kubenswrapper[7385]: I0319
09:21:53.127882 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" event={"ID":"55440bf9-0881-4823-af64-5652c2ad89ff","Type":"ContainerStarted","Data":"e5a7ff1e640478e36ab12b4b51fd51ee7aaa3892f036b66c9418cdc588759fd5"}
Mar 19 09:21:53.128426 master-0 kubenswrapper[7385]: I0319 09:21:53.127924 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" event={"ID":"55440bf9-0881-4823-af64-5652c2ad89ff","Type":"ContainerStarted","Data":"c2c1fb4aec553af65176f49e937958c69c931605beee69d28364ee9ba795514f"}
Mar 19 09:21:53.128426 master-0 kubenswrapper[7385]: I0319 09:21:53.128322 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:53.132245 master-0 kubenswrapper[7385]: I0319 09:21:53.132056 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" event={"ID":"6ed4ce2b-080f-4523-8527-eee768e06123","Type":"ContainerStarted","Data":"5587303dfbff2e0f6e8f88f34bf2533361126f22ec3322ef362bf2e083f2b5d9"}
Mar 19 09:21:53.140800 master-0 kubenswrapper[7385]: I0319 09:21:53.140732 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" podStartSLOduration=2.140696937 podStartE2EDuration="2.140696937s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:21:53.126029447 +0000 UTC m=+208.800459148" watchObservedRunningTime="2026-03-19 09:21:53.140696937 +0000 UTC m=+208.815126658"
Mar 19 09:21:53.150354 master-0 kubenswrapper[7385]: I0319 09:21:53.150309 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" event={"ID":"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8","Type":"ContainerStarted","Data":"e410ee8894613806def4cf6b6f43112e26301675993c21583e60f54bf70df860"}
Mar 19 09:21:53.150501 master-0 kubenswrapper[7385]: I0319 09:21:53.150365 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" event={"ID":"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8","Type":"ContainerStarted","Data":"95f6a209ef68dab4cb5672857aeba51bebac9f6d112d21c7fcd718cb5be803c7"}
Mar 19 09:21:53.152076 master-0 kubenswrapper[7385]: I0319 09:21:53.152025 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" podStartSLOduration=2.152009837 podStartE2EDuration="2.152009837s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:21:53.148937257 +0000 UTC m=+208.823366968" watchObservedRunningTime="2026-03-19 09:21:53.152009837 +0000 UTC m=+208.826439538"
Mar 19 09:21:53.152380 master-0 kubenswrapper[7385]: I0319 09:21:53.152351 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerStarted","Data":"1a5f6410bd15dc4c89480d862762a66dd97ba2b472dec615d9046187f9e50b22"}
Mar 19 09:21:53.154355 master-0 kubenswrapper[7385]: I0319 09:21:53.154333 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" event={"ID":"cef53432-93f5-4581-b3de-c8cc5cac2ecb","Type":"ContainerStarted","Data":"093a8e850a736d3eca3797467a6bc2ecea1fef6e909d2da61102bdda8dc94887"}
Mar 19 09:21:53.525750 master-0 kubenswrapper[7385]: I0319
09:21:53.525653 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:21:56.236249 master-0 kubenswrapper[7385]: I0319 09:21:56.235907 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"]
Mar 19 09:21:56.238056 master-0 kubenswrapper[7385]: I0319 09:21:56.238034 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.240725 master-0 kubenswrapper[7385]: I0319 09:21:56.240484 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-fxmqq"
Mar 19 09:21:56.240725 master-0 kubenswrapper[7385]: I0319 09:21:56.240703 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 19 09:21:56.255103 master-0 kubenswrapper[7385]: I0319 09:21:56.254990 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"]
Mar 19 09:21:56.270082 master-0 kubenswrapper[7385]: I0319 09:21:56.266101 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.270082 master-0 kubenswrapper[7385]: I0319 09:21:56.266151 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e5534b-f428-45cf-b54e-d06b25dc3e09-mcc-auth-proxy-config\") pod
\"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.270082 master-0 kubenswrapper[7385]: I0319 09:21:56.266171 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45nc\" (UniqueName: \"kubernetes.io/projected/67e5534b-f428-45cf-b54e-d06b25dc3e09-kube-api-access-s45nc\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.367505 master-0 kubenswrapper[7385]: I0319 09:21:56.367454 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e5534b-f428-45cf-b54e-d06b25dc3e09-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.367725 master-0 kubenswrapper[7385]: I0319 09:21:56.367517 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45nc\" (UniqueName: \"kubernetes.io/projected/67e5534b-f428-45cf-b54e-d06b25dc3e09-kube-api-access-s45nc\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.367725 master-0 kubenswrapper[7385]: I0319 09:21:56.367641 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") "
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.368392 master-0 kubenswrapper[7385]: I0319 09:21:56.368341 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e5534b-f428-45cf-b54e-d06b25dc3e09-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.373026 master-0 kubenswrapper[7385]: I0319 09:21:56.373001 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.382595 master-0 kubenswrapper[7385]: I0319 09:21:56.382566 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45nc\" (UniqueName: \"kubernetes.io/projected/67e5534b-f428-45cf-b54e-d06b25dc3e09-kube-api-access-s45nc\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:56.559506 master-0 kubenswrapper[7385]: I0319 09:21:56.559411 7385 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:21:58.773339 master-0 kubenswrapper[7385]: I0319 09:21:58.773212 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"]
Mar 19 09:21:58.991965 master-0 kubenswrapper[7385]: I0319 09:21:58.991925 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"]
Mar 19 09:21:59.219927 master-0 kubenswrapper[7385]: I0319 09:21:59.219886 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerStarted","Data":"6fd6080616c60415dc0f2dfc5aa404bff94ac07e13ce0f4b9e4c32835968ac86"}
Mar 19 09:21:59.227394 master-0 kubenswrapper[7385]: I0319 09:21:59.227309 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" event={"ID":"cef53432-93f5-4581-b3de-c8cc5cac2ecb","Type":"ContainerStarted","Data":"bc9135aad8b62aff6fca98f88f979a784539469fc0e4b4ef505d6e449c8e8562"}
Mar 19 09:21:59.239344 master-0 kubenswrapper[7385]: I0319 09:21:59.239113 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" event={"ID":"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094","Type":"ContainerStarted","Data":"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4"}
Mar 19 09:21:59.239344 master-0 kubenswrapper[7385]: I0319 09:21:59.239282 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="kube-rbac-proxy"
containerID="cri-o://69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913" gracePeriod=30 Mar 19 09:21:59.239566 master-0 kubenswrapper[7385]: I0319 09:21:59.239359 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="machine-approver-controller" containerID="cri-o://63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4" gracePeriod=30 Mar 19 09:21:59.258596 master-0 kubenswrapper[7385]: I0319 09:21:59.258509 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" podStartSLOduration=2.285093265 podStartE2EDuration="8.258489366s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:52.624287581 +0000 UTC m=+208.298717282" lastFinishedPulling="2026-03-19 09:21:58.597683672 +0000 UTC m=+214.272113383" observedRunningTime="2026-03-19 09:21:59.258199497 +0000 UTC m=+214.932629218" watchObservedRunningTime="2026-03-19 09:21:59.258489366 +0000 UTC m=+214.932919077" Mar 19 09:21:59.260256 master-0 kubenswrapper[7385]: I0319 09:21:59.260221 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" event={"ID":"6ed4ce2b-080f-4523-8527-eee768e06123","Type":"ContainerStarted","Data":"327e853951b2d3859df0668e5fd700537932666bde64680cdc7ebf33b5ba3cd2"} Mar 19 09:21:59.260435 master-0 kubenswrapper[7385]: I0319 09:21:59.260273 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" event={"ID":"6ed4ce2b-080f-4523-8527-eee768e06123","Type":"ContainerStarted","Data":"0183e1bce77c117d90f17190794c126404c096af3cca38a1d89a823e089ab0b0"} Mar 19 09:21:59.264726 master-0 kubenswrapper[7385]: I0319 09:21:59.264685 7385 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" event={"ID":"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8","Type":"ContainerStarted","Data":"d53ad972361319c74f326b3096df26b027816cd81f61b8b72dac0988e8a98e3b"} Mar 19 09:21:59.267125 master-0 kubenswrapper[7385]: I0319 09:21:59.267091 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" event={"ID":"67e5534b-f428-45cf-b54e-d06b25dc3e09","Type":"ContainerStarted","Data":"d06b72c6f0371c1b0257ad61f4ae8d069961f5af58fd20925966cfc79d79903d"} Mar 19 09:21:59.303663 master-0 kubenswrapper[7385]: I0319 09:21:59.303430 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" podStartSLOduration=2.398963943 podStartE2EDuration="8.303412537s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:52.685502755 +0000 UTC m=+208.359932456" lastFinishedPulling="2026-03-19 09:21:58.589951349 +0000 UTC m=+214.264381050" observedRunningTime="2026-03-19 09:21:59.301823025 +0000 UTC m=+214.976252746" watchObservedRunningTime="2026-03-19 09:21:59.303412537 +0000 UTC m=+214.977842248" Mar 19 09:21:59.306656 master-0 kubenswrapper[7385]: I0319 09:21:59.304244 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" podStartSLOduration=2.365033142 podStartE2EDuration="8.304237344s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:52.649682052 +0000 UTC m=+208.324111753" lastFinishedPulling="2026-03-19 09:21:58.588886254 +0000 UTC m=+214.263315955" observedRunningTime="2026-03-19 09:21:59.277059004 +0000 UTC m=+214.951488725" watchObservedRunningTime="2026-03-19 09:21:59.304237344 +0000 UTC m=+214.978667045" Mar 19 09:21:59.343431 
master-0 kubenswrapper[7385]: I0319 09:21:59.342697 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" podStartSLOduration=2.819822381 podStartE2EDuration="8.342671242s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:53.068711731 +0000 UTC m=+208.743141432" lastFinishedPulling="2026-03-19 09:21:58.591560592 +0000 UTC m=+214.265990293" observedRunningTime="2026-03-19 09:21:59.333230353 +0000 UTC m=+215.007660064" watchObservedRunningTime="2026-03-19 09:21:59.342671242 +0000 UTC m=+215.017100963" Mar 19 09:21:59.363649 master-0 kubenswrapper[7385]: I0319 09:21:59.363170 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd"] Mar 19 09:21:59.366933 master-0 kubenswrapper[7385]: I0319 09:21:59.364200 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:21:59.370680 master-0 kubenswrapper[7385]: I0319 09:21:59.366877 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-t4gft" Mar 19 09:21:59.370680 master-0 kubenswrapper[7385]: I0319 09:21:59.369850 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns"] Mar 19 09:21:59.370902 master-0 kubenswrapper[7385]: I0319 09:21:59.370808 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" Mar 19 09:21:59.374556 master-0 kubenswrapper[7385]: I0319 09:21:59.371082 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 19 09:21:59.374556 master-0 kubenswrapper[7385]: I0319 09:21:59.374058 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-k99cg"] Mar 19 09:21:59.376355 master-0 kubenswrapper[7385]: I0319 09:21:59.374985 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.378204 master-0 kubenswrapper[7385]: I0319 09:21:59.377502 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 09:21:59.381769 master-0 kubenswrapper[7385]: I0319 09:21:59.380268 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 09:21:59.381769 master-0 kubenswrapper[7385]: I0319 09:21:59.380341 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-cn6b2" Mar 19 09:21:59.381769 master-0 kubenswrapper[7385]: I0319 09:21:59.380512 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 09:21:59.381769 master-0 kubenswrapper[7385]: I0319 09:21:59.380724 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 09:21:59.381769 master-0 kubenswrapper[7385]: I0319 09:21:59.380880 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 09:21:59.381769 master-0 kubenswrapper[7385]: I0319 09:21:59.381113 7385 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 09:21:59.385977 master-0 kubenswrapper[7385]: I0319 09:21:59.385187 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd"] Mar 19 09:21:59.393765 master-0 kubenswrapper[7385]: I0319 09:21:59.393579 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns"] Mar 19 09:21:59.415214 master-0 kubenswrapper[7385]: I0319 09:21:59.414042 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbvbr\" (UniqueName: \"kubernetes.io/projected/b42aee2f-bffc-4c43-bf20-16d9c67d216c-kube-api-access-lbvbr\") pod \"network-check-source-b4bf74f6-tk6ns\" (UID: \"b42aee2f-bffc-4c43-bf20-16d9c67d216c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" Mar 19 09:21:59.415214 master-0 kubenswrapper[7385]: I0319 09:21:59.414133 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flln7\" (UniqueName: \"kubernetes.io/projected/57227a66-c758-4a46-a5e1-f603baa3f570-kube-api-access-flln7\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.415214 master-0 kubenswrapper[7385]: I0319 09:21:59.414158 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.415214 master-0 kubenswrapper[7385]: I0319 09:21:59.414213 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.415214 master-0 kubenswrapper[7385]: I0319 09:21:59.414354 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.415214 master-0 kubenswrapper[7385]: I0319 09:21:59.414470 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-fngzd\" (UID: \"0adaea87-67d0-41a7-a1f3-855fdd483aca\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:21:59.415214 master-0 kubenswrapper[7385]: I0319 09:21:59.414505 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.452760 master-0 kubenswrapper[7385]: I0319 09:21:59.452229 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:21:59.515587 master-0 kubenswrapper[7385]: I0319 09:21:59.515556 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-machine-approver-tls\") pod \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " Mar 19 09:21:59.515651 master-0 kubenswrapper[7385]: I0319 09:21:59.515639 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr4fk\" (UniqueName: \"kubernetes.io/projected/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-kube-api-access-lr4fk\") pod \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " Mar 19 09:21:59.515728 master-0 kubenswrapper[7385]: I0319 09:21:59.515710 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-auth-proxy-config\") pod \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " Mar 19 09:21:59.515770 master-0 kubenswrapper[7385]: I0319 09:21:59.515757 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-config\") pod \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\" (UID: \"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094\") " Mar 19 09:21:59.515949 master-0 kubenswrapper[7385]: I0319 09:21:59.515932 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbvbr\" (UniqueName: \"kubernetes.io/projected/b42aee2f-bffc-4c43-bf20-16d9c67d216c-kube-api-access-lbvbr\") pod \"network-check-source-b4bf74f6-tk6ns\" (UID: \"b42aee2f-bffc-4c43-bf20-16d9c67d216c\") " 
pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" Mar 19 09:21:59.516012 master-0 kubenswrapper[7385]: I0319 09:21:59.515964 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flln7\" (UniqueName: \"kubernetes.io/projected/57227a66-c758-4a46-a5e1-f603baa3f570-kube-api-access-flln7\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.516012 master-0 kubenswrapper[7385]: I0319 09:21:59.515986 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.516012 master-0 kubenswrapper[7385]: I0319 09:21:59.516010 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.516105 master-0 kubenswrapper[7385]: I0319 09:21:59.516044 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.516139 master-0 kubenswrapper[7385]: I0319 09:21:59.516101 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-fngzd\" (UID: \"0adaea87-67d0-41a7-a1f3-855fdd483aca\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:21:59.517223 master-0 kubenswrapper[7385]: I0319 09:21:59.517188 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-config" (OuterVolumeSpecName: "config") pod "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" (UID: "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:21:59.517733 master-0 kubenswrapper[7385]: I0319 09:21:59.517699 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.517824 master-0 kubenswrapper[7385]: I0319 09:21:59.517801 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.518213 master-0 kubenswrapper[7385]: I0319 09:21:59.518192 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:21:59.522173 master-0 kubenswrapper[7385]: I0319 09:21:59.522124 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" (UID: "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:21:59.523423 master-0 kubenswrapper[7385]: I0319 09:21:59.522963 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-fngzd\" (UID: \"0adaea87-67d0-41a7-a1f3-855fdd483aca\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:21:59.523852 master-0 kubenswrapper[7385]: I0319 09:21:59.523827 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" (UID: "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:21:59.524135 master-0 kubenswrapper[7385]: I0319 09:21:59.524107 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.525982 master-0 kubenswrapper[7385]: I0319 09:21:59.525953 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.531892 master-0 kubenswrapper[7385]: I0319 09:21:59.531857 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-kube-api-access-lr4fk" (OuterVolumeSpecName: "kube-api-access-lr4fk") pod "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" (UID: "a7b69c61-1dd7-47ac-97b8-d4cf2bca6094"). InnerVolumeSpecName "kube-api-access-lr4fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:21:59.536580 master-0 kubenswrapper[7385]: I0319 09:21:59.534255 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.536759 master-0 kubenswrapper[7385]: I0319 09:21:59.536707 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvbr\" (UniqueName: \"kubernetes.io/projected/b42aee2f-bffc-4c43-bf20-16d9c67d216c-kube-api-access-lbvbr\") pod \"network-check-source-b4bf74f6-tk6ns\" (UID: \"b42aee2f-bffc-4c43-bf20-16d9c67d216c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" Mar 19 09:21:59.538706 master-0 kubenswrapper[7385]: I0319 09:21:59.538679 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flln7\" (UniqueName: \"kubernetes.io/projected/57227a66-c758-4a46-a5e1-f603baa3f570-kube-api-access-flln7\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.621185 master-0 kubenswrapper[7385]: I0319 09:21:59.621149 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr4fk\" (UniqueName: \"kubernetes.io/projected/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-kube-api-access-lr4fk\") on node \"master-0\" DevicePath \"\"" Mar 19 09:21:59.621185 master-0 kubenswrapper[7385]: I0319 09:21:59.621180 7385 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:21:59.621185 master-0 kubenswrapper[7385]: I0319 09:21:59.621191 
7385 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 19 09:21:59.697648 master-0 kubenswrapper[7385]: I0319 09:21:59.697589 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:21:59.710500 master-0 kubenswrapper[7385]: I0319 09:21:59.710254 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" Mar 19 09:21:59.728049 master-0 kubenswrapper[7385]: I0319 09:21:59.728006 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:21:59.782338 master-0 kubenswrapper[7385]: W0319 09:21:59.782286 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57227a66_c758_4a46_a5e1_f603baa3f570.slice/crio-757166b43c0c56e8283c67b367d970d37bc2cba347814ca1a8d85ab635b22caa WatchSource:0}: Error finding container 757166b43c0c56e8283c67b367d970d37bc2cba347814ca1a8d85ab635b22caa: Status 404 returned error can't find the container with id 757166b43c0c56e8283c67b367d970d37bc2cba347814ca1a8d85ab635b22caa Mar 19 09:22:00.129492 master-0 kubenswrapper[7385]: I0319 09:22:00.129445 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd"] Mar 19 09:22:00.210662 master-0 kubenswrapper[7385]: I0319 09:22:00.210618 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns"] Mar 19 09:22:00.276423 master-0 kubenswrapper[7385]: I0319 09:22:00.276369 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerStarted","Data":"757166b43c0c56e8283c67b367d970d37bc2cba347814ca1a8d85ab635b22caa"} Mar 19 09:22:00.279016 master-0 kubenswrapper[7385]: I0319 09:22:00.278984 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" event={"ID":"67e5534b-f428-45cf-b54e-d06b25dc3e09","Type":"ContainerStarted","Data":"c9951e834eac9fa8b70d5e1fa9bb37afc3d9012f0b6806bedca4371ec18ecd3e"} Mar 19 09:22:00.279016 master-0 kubenswrapper[7385]: I0319 09:22:00.279019 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" event={"ID":"67e5534b-f428-45cf-b54e-d06b25dc3e09","Type":"ContainerStarted","Data":"d5770bd81e364bc5873efaef75622b8bd010e5cbf7169222b8a31abb0223949a"} Mar 19 09:22:00.282466 master-0 kubenswrapper[7385]: I0319 09:22:00.282432 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerStarted","Data":"9d93731d12325d249433b098d7525f5e58d5658366b38d62bbea3cb61acc6b4a"} Mar 19 09:22:00.282606 master-0 kubenswrapper[7385]: I0319 09:22:00.282471 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerStarted","Data":"7f838ca4a31c3116568dc15531f5add527e92f8f330422d67816ce564c3aab44"} Mar 19 09:22:00.282606 master-0 kubenswrapper[7385]: I0319 09:22:00.282598 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" 
podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="cluster-cloud-controller-manager" containerID="cri-o://6fd6080616c60415dc0f2dfc5aa404bff94ac07e13ce0f4b9e4c32835968ac86" gracePeriod=30 Mar 19 09:22:00.282701 master-0 kubenswrapper[7385]: I0319 09:22:00.282676 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="kube-rbac-proxy" containerID="cri-o://9d93731d12325d249433b098d7525f5e58d5658366b38d62bbea3cb61acc6b4a" gracePeriod=30 Mar 19 09:22:00.289385 master-0 kubenswrapper[7385]: I0319 09:22:00.282726 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="config-sync-controllers" containerID="cri-o://7f838ca4a31c3116568dc15531f5add527e92f8f330422d67816ce564c3aab44" gracePeriod=30 Mar 19 09:22:00.289385 master-0 kubenswrapper[7385]: I0319 09:22:00.288797 7385 generic.go:334] "Generic (PLEG): container finished" podID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerID="63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4" exitCode=0 Mar 19 09:22:00.289385 master-0 kubenswrapper[7385]: I0319 09:22:00.288828 7385 generic.go:334] "Generic (PLEG): container finished" podID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerID="69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913" exitCode=0 Mar 19 09:22:00.292770 master-0 kubenswrapper[7385]: I0319 09:22:00.289514 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" Mar 19 09:22:00.292770 master-0 kubenswrapper[7385]: I0319 09:22:00.289981 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" event={"ID":"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094","Type":"ContainerDied","Data":"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4"} Mar 19 09:22:00.292770 master-0 kubenswrapper[7385]: I0319 09:22:00.290090 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" event={"ID":"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094","Type":"ContainerDied","Data":"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913"} Mar 19 09:22:00.292770 master-0 kubenswrapper[7385]: I0319 09:22:00.290104 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m" event={"ID":"a7b69c61-1dd7-47ac-97b8-d4cf2bca6094","Type":"ContainerDied","Data":"868ba7d1d5ae31248f8559956f57aa9fe3e7cf2a5de8ce062524bdbe8b0ff198"} Mar 19 09:22:00.292770 master-0 kubenswrapper[7385]: I0319 09:22:00.290121 7385 scope.go:117] "RemoveContainer" containerID="63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4" Mar 19 09:22:00.305477 master-0 kubenswrapper[7385]: I0319 09:22:00.304864 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" podStartSLOduration=4.304843383 podStartE2EDuration="4.304843383s" podCreationTimestamp="2026-03-19 09:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:22:00.300996316 +0000 UTC m=+215.975426037" watchObservedRunningTime="2026-03-19 09:22:00.304843383 +0000 UTC m=+215.979273104" Mar 19 09:22:00.329908 
master-0 kubenswrapper[7385]: I0319 09:22:00.328137 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" podStartSLOduration=3.045834482 podStartE2EDuration="9.328117875s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:52.368496447 +0000 UTC m=+208.042926148" lastFinishedPulling="2026-03-19 09:21:58.65077984 +0000 UTC m=+214.325209541" observedRunningTime="2026-03-19 09:22:00.325966454 +0000 UTC m=+216.000396175" watchObservedRunningTime="2026-03-19 09:22:00.328117875 +0000 UTC m=+216.002547586"
Mar 19 09:22:00.375512 master-0 kubenswrapper[7385]: I0319 09:22:00.374828 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m"]
Mar 19 09:22:00.382317 master-0 kubenswrapper[7385]: I0319 09:22:00.382215 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-9ll5m"]
Mar 19 09:22:00.411018 master-0 kubenswrapper[7385]: I0319 09:22:00.410969 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"]
Mar 19 09:22:00.411647 master-0 kubenswrapper[7385]: E0319 09:22:00.411221 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="machine-approver-controller"
Mar 19 09:22:00.411647 master-0 kubenswrapper[7385]: I0319 09:22:00.411255 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="machine-approver-controller"
Mar 19 09:22:00.411647 master-0 kubenswrapper[7385]: E0319 09:22:00.411269 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="kube-rbac-proxy"
Mar 19 09:22:00.411647 master-0 kubenswrapper[7385]: I0319 09:22:00.411278 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="kube-rbac-proxy"
Mar 19 09:22:00.411647 master-0 kubenswrapper[7385]: I0319 09:22:00.411502 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="machine-approver-controller"
Mar 19 09:22:00.411647 master-0 kubenswrapper[7385]: I0319 09:22:00.411524 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" containerName="kube-rbac-proxy"
Mar 19 09:22:00.413025 master-0 kubenswrapper[7385]: I0319 09:22:00.413001 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.414647 master-0 kubenswrapper[7385]: I0319 09:22:00.414606 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 19 09:22:00.416234 master-0 kubenswrapper[7385]: I0319 09:22:00.416071 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 19 09:22:00.416234 master-0 kubenswrapper[7385]: I0319 09:22:00.416190 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vkwb4"
Mar 19 09:22:00.416336 master-0 kubenswrapper[7385]: I0319 09:22:00.416264 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 19 09:22:00.416336 master-0 kubenswrapper[7385]: I0319 09:22:00.416293 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 19 09:22:00.417504 master-0 kubenswrapper[7385]: I0319 09:22:00.417459 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 19 09:22:00.432989 master-0 kubenswrapper[7385]: I0319 09:22:00.432956 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2ng\" (UniqueName: \"kubernetes.io/projected/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-kube-api-access-7g2ng\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.433114 master-0 kubenswrapper[7385]: I0319 09:22:00.433016 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.433114 master-0 kubenswrapper[7385]: I0319 09:22:00.433063 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.433114 master-0 kubenswrapper[7385]: I0319 09:22:00.433101 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.534906 master-0 kubenswrapper[7385]: I0319 09:22:00.534822 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.535085 master-0 kubenswrapper[7385]: I0319 09:22:00.534978 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2ng\" (UniqueName: \"kubernetes.io/projected/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-kube-api-access-7g2ng\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.535134 master-0 kubenswrapper[7385]: I0319 09:22:00.535086 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.535134 master-0 kubenswrapper[7385]: I0319 09:22:00.535087 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.535202 master-0 kubenswrapper[7385]: I0319 09:22:00.535165 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.535868 master-0 kubenswrapper[7385]: I0319 09:22:00.535837 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.543674 master-0 kubenswrapper[7385]: I0319 09:22:00.543586 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7b69c61-1dd7-47ac-97b8-d4cf2bca6094" path="/var/lib/kubelet/pods/a7b69c61-1dd7-47ac-97b8-d4cf2bca6094/volumes"
Mar 19 09:22:00.546907 master-0 kubenswrapper[7385]: I0319 09:22:00.546878 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.553026 master-0 kubenswrapper[7385]: I0319 09:22:00.552990 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2ng\" (UniqueName: \"kubernetes.io/projected/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-kube-api-access-7g2ng\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:00.733501 master-0 kubenswrapper[7385]: I0319 09:22:00.733398 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:22:02.001410 master-0 kubenswrapper[7385]: I0319 09:22:02.001380 7385 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 09:22:02.072828 master-0 kubenswrapper[7385]: I0319 09:22:02.072762 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:22:03.875894 master-0 kubenswrapper[7385]: I0319 09:22:03.875852 7385 generic.go:334] "Generic (PLEG): container finished" podID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerID="9d93731d12325d249433b098d7525f5e58d5658366b38d62bbea3cb61acc6b4a" exitCode=0
Mar 19 09:22:03.876500 master-0 kubenswrapper[7385]: I0319 09:22:03.876482 7385 generic.go:334] "Generic (PLEG): container finished" podID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerID="7f838ca4a31c3116568dc15531f5add527e92f8f330422d67816ce564c3aab44" exitCode=0
Mar 19 09:22:03.876665 master-0 kubenswrapper[7385]: I0319 09:22:03.876647 7385 generic.go:334] "Generic (PLEG): container finished" podID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerID="6fd6080616c60415dc0f2dfc5aa404bff94ac07e13ce0f4b9e4c32835968ac86" exitCode=0
Mar 19 09:22:03.876756 master-0 kubenswrapper[7385]: I0319 09:22:03.876492 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerDied","Data":"9d93731d12325d249433b098d7525f5e58d5658366b38d62bbea3cb61acc6b4a"}
Mar 19 09:22:03.876863 master-0 kubenswrapper[7385]: I0319 09:22:03.876842 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerDied","Data":"7f838ca4a31c3116568dc15531f5add527e92f8f330422d67816ce564c3aab44"}
Mar 19 09:22:03.876952 master-0 kubenswrapper[7385]: I0319 09:22:03.876934 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerDied","Data":"6fd6080616c60415dc0f2dfc5aa404bff94ac07e13ce0f4b9e4c32835968ac86"}
Mar 19 09:22:04.524103 master-0 kubenswrapper[7385]: I0319 09:22:04.524045 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nsnqt"]
Mar 19 09:22:04.524950 master-0 kubenswrapper[7385]: I0319 09:22:04.524892 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.526464 master-0 kubenswrapper[7385]: I0319 09:22:04.526424 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-szhzh"
Mar 19 09:22:04.527115 master-0 kubenswrapper[7385]: I0319 09:22:04.527078 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 19 09:22:04.527284 master-0 kubenswrapper[7385]: I0319 09:22:04.527268 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 19 09:22:04.683106 master-0 kubenswrapper[7385]: I0319 09:22:04.683063 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.683308 master-0 kubenswrapper[7385]: I0319 09:22:04.683139 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.683722 master-0 kubenswrapper[7385]: I0319 09:22:04.683689 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqts\" (UniqueName: \"kubernetes.io/projected/e0491730-604c-4a66-b827-458da88d262b-kube-api-access-gmqts\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.792650 master-0 kubenswrapper[7385]: I0319 09:22:04.792509 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.792650 master-0 kubenswrapper[7385]: I0319 09:22:04.792596 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqts\" (UniqueName: \"kubernetes.io/projected/e0491730-604c-4a66-b827-458da88d262b-kube-api-access-gmqts\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.792891 master-0 kubenswrapper[7385]: I0319 09:22:04.792827 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.795670 master-0 kubenswrapper[7385]: I0319 09:22:04.795631 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.802696 master-0 kubenswrapper[7385]: I0319 09:22:04.802653 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.821975 master-0 kubenswrapper[7385]: I0319 09:22:04.821941 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqts\" (UniqueName: \"kubernetes.io/projected/e0491730-604c-4a66-b827-458da88d262b-kube-api-access-gmqts\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:04.845047 master-0 kubenswrapper[7385]: I0319 09:22:04.845010 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:22:06.905304 master-0 kubenswrapper[7385]: I0319 09:22:06.905216 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" event={"ID":"0adaea87-67d0-41a7-a1f3-855fdd483aca","Type":"ContainerStarted","Data":"611f0577f694d16ae6cfdfa887a45e57816d4fedaa4b7733f18258fff60747d7"}
Mar 19 09:22:06.907100 master-0 kubenswrapper[7385]: I0319 09:22:06.907047 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" event={"ID":"b42aee2f-bffc-4c43-bf20-16d9c67d216c","Type":"ContainerStarted","Data":"a920827df943f06d02da8e8ea819eda5fb31c3dfefaa7f8b86842839ee17dd17"}
Mar 19 09:22:13.439451 master-0 kubenswrapper[7385]: I0319 09:22:13.439408 7385 scope.go:117] "RemoveContainer" containerID="69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913"
Mar 19 09:22:14.769162 master-0 kubenswrapper[7385]: I0319 09:22:14.769046 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:22:14.846742 master-0 kubenswrapper[7385]: I0319 09:22:14.846696 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-cloud-controller-manager-operator-tls\") pod \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") "
Mar 19 09:22:14.847466 master-0 kubenswrapper[7385]: I0319 09:22:14.847384 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-auth-proxy-config\") pod \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") "
Mar 19 09:22:14.847466 master-0 kubenswrapper[7385]: I0319 09:22:14.847446 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m55tp\" (UniqueName: \"kubernetes.io/projected/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-kube-api-access-m55tp\") pod \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") "
Mar 19 09:22:14.848215 master-0 kubenswrapper[7385]: I0319 09:22:14.848170 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-images\") pod \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") "
Mar 19 09:22:14.848310 master-0 kubenswrapper[7385]: I0319 09:22:14.848172 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" (UID: "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:22:14.848352 master-0 kubenswrapper[7385]: I0319 09:22:14.848308 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" (UID: "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:22:14.848393 master-0 kubenswrapper[7385]: I0319 09:22:14.848275 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-host-etc-kube\") pod \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\" (UID: \"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5\") "
Mar 19 09:22:14.848715 master-0 kubenswrapper[7385]: I0319 09:22:14.848670 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-images" (OuterVolumeSpecName: "images") pod "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" (UID: "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:22:14.848872 master-0 kubenswrapper[7385]: I0319 09:22:14.848843 7385 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:22:14.848928 master-0 kubenswrapper[7385]: I0319 09:22:14.848874 7385 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-images\") on node \"master-0\" DevicePath \"\""
Mar 19 09:22:14.848928 master-0 kubenswrapper[7385]: I0319 09:22:14.848888 7385 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-host-etc-kube\") on node \"master-0\" DevicePath \"\""
Mar 19 09:22:14.850417 master-0 kubenswrapper[7385]: I0319 09:22:14.850359 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" (UID: "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:22:14.864785 master-0 kubenswrapper[7385]: I0319 09:22:14.864751 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-kube-api-access-m55tp" (OuterVolumeSpecName: "kube-api-access-m55tp") pod "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" (UID: "a19a9c80-6289-4a07-bc4a-b1f87a2e96b5"). InnerVolumeSpecName "kube-api-access-m55tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:22:14.950615 master-0 kubenswrapper[7385]: I0319 09:22:14.950453 7385 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\""
Mar 19 09:22:14.950615 master-0 kubenswrapper[7385]: I0319 09:22:14.950504 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m55tp\" (UniqueName: \"kubernetes.io/projected/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5-kube-api-access-m55tp\") on node \"master-0\" DevicePath \"\""
Mar 19 09:22:14.953179 master-0 kubenswrapper[7385]: I0319 09:22:14.953133 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml" event={"ID":"a19a9c80-6289-4a07-bc4a-b1f87a2e96b5","Type":"ContainerDied","Data":"1a5f6410bd15dc4c89480d862762a66dd97ba2b472dec615d9046187f9e50b22"}
Mar 19 09:22:14.953256 master-0 kubenswrapper[7385]: I0319 09:22:14.953237 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"
Mar 19 09:22:15.002981 master-0 kubenswrapper[7385]: I0319 09:22:15.002911 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"]
Mar 19 09:22:15.005761 master-0 kubenswrapper[7385]: I0319 09:22:15.005696 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-vnkml"]
Mar 19 09:22:15.027432 master-0 kubenswrapper[7385]: I0319 09:22:15.027391 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"]
Mar 19 09:22:15.027994 master-0 kubenswrapper[7385]: E0319 09:22:15.027977 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="cluster-cloud-controller-manager"
Mar 19 09:22:15.028121 master-0 kubenswrapper[7385]: I0319 09:22:15.028107 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="cluster-cloud-controller-manager"
Mar 19 09:22:15.028222 master-0 kubenswrapper[7385]: E0319 09:22:15.028208 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="kube-rbac-proxy"
Mar 19 09:22:15.028334 master-0 kubenswrapper[7385]: I0319 09:22:15.028319 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="kube-rbac-proxy"
Mar 19 09:22:15.028427 master-0 kubenswrapper[7385]: E0319 09:22:15.028414 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="config-sync-controllers"
Mar 19 09:22:15.028501 master-0 kubenswrapper[7385]: I0319 09:22:15.028489 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="config-sync-controllers"
Mar 19 09:22:15.028794 master-0 kubenswrapper[7385]: I0319 09:22:15.028778 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="cluster-cloud-controller-manager"
Mar 19 09:22:15.028930 master-0 kubenswrapper[7385]: I0319 09:22:15.028917 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="config-sync-controllers"
Mar 19 09:22:15.029082 master-0 kubenswrapper[7385]: I0319 09:22:15.029069 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" containerName="kube-rbac-proxy"
Mar 19 09:22:15.030750 master-0 kubenswrapper[7385]: I0319 09:22:15.030729 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.032875 master-0 kubenswrapper[7385]: I0319 09:22:15.032810 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 19 09:22:15.033191 master-0 kubenswrapper[7385]: I0319 09:22:15.033163 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 19 09:22:15.033323 master-0 kubenswrapper[7385]: I0319 09:22:15.033300 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 19 09:22:15.034107 master-0 kubenswrapper[7385]: I0319 09:22:15.034067 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4qjxp"
Mar 19 09:22:15.034165 master-0 kubenswrapper[7385]: I0319 09:22:15.034128 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 19 09:22:15.034402 master-0 kubenswrapper[7385]: I0319 09:22:15.034355 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 19 09:22:15.051788 master-0 kubenswrapper[7385]: I0319 09:22:15.051711 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.051788 master-0 kubenswrapper[7385]: I0319 09:22:15.051764 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.051788 master-0 kubenswrapper[7385]: I0319 09:22:15.051786 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c3610f08-aba1-411d-aa6d-811b88acdb7b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.052095 master-0 kubenswrapper[7385]: I0319 09:22:15.051844 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.052095 master-0 kubenswrapper[7385]: I0319 09:22:15.051883 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdgvx\" (UniqueName: \"kubernetes.io/projected/c3610f08-aba1-411d-aa6d-811b88acdb7b-kube-api-access-jdgvx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153066 master-0 kubenswrapper[7385]: I0319 09:22:15.152553 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdgvx\" (UniqueName: \"kubernetes.io/projected/c3610f08-aba1-411d-aa6d-811b88acdb7b-kube-api-access-jdgvx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153066 master-0 kubenswrapper[7385]: I0319 09:22:15.152597 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153066 master-0 kubenswrapper[7385]: I0319 09:22:15.152626 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153066 master-0 kubenswrapper[7385]: I0319 09:22:15.152652 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c3610f08-aba1-411d-aa6d-811b88acdb7b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153066 master-0 kubenswrapper[7385]: I0319 09:22:15.152705 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153066 master-0 kubenswrapper[7385]: I0319 09:22:15.152772 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c3610f08-aba1-411d-aa6d-811b88acdb7b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153756 master-0 kubenswrapper[7385]: I0319 09:22:15.153383 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.153756 master-0 kubenswrapper[7385]: I0319 09:22:15.153716 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.156414 master-0 kubenswrapper[7385]: I0319 09:22:15.156071 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.175668 master-0 kubenswrapper[7385]: I0319 09:22:15.175632 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdgvx\" (UniqueName: \"kubernetes.io/projected/c3610f08-aba1-411d-aa6d-811b88acdb7b-kube-api-access-jdgvx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:15.362118 master-0 kubenswrapper[7385]: I0319 09:22:15.361424 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:22:16.537232 master-0 kubenswrapper[7385]: I0319 09:22:16.537183 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19a9c80-6289-4a07-bc4a-b1f87a2e96b5" path="/var/lib/kubelet/pods/a19a9c80-6289-4a07-bc4a-b1f87a2e96b5/volumes"
Mar 19 09:22:17.643462 master-0 kubenswrapper[7385]: I0319 09:22:17.643360 7385 scope.go:117] "RemoveContainer" containerID="63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4"
Mar 19 09:22:17.644160 master-0 kubenswrapper[7385]: E0319 09:22:17.644128 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4\": container with ID starting with 63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4 not found: ID does not exist" containerID="63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4"
Mar 19 09:22:17.644202 master-0 kubenswrapper[7385]: I0319 09:22:17.644170 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4"} err="failed to get container status \"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4\": rpc error: code = NotFound desc = could not find container \"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4\": container with ID starting with 63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4 not found: ID does not exist"
Mar 19 09:22:17.644202 master-0 kubenswrapper[7385]: I0319 09:22:17.644191 7385 scope.go:117] "RemoveContainer" containerID="69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913"
Mar 19 09:22:17.644504 master-0 kubenswrapper[7385]: E0319 09:22:17.644477 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913\": container with ID starting with 69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913 not found: ID does not exist" containerID="69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913"
Mar 19 09:22:17.644558 master-0 kubenswrapper[7385]: I0319 09:22:17.644499 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913"} err="failed to get container status \"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913\": rpc error: code = NotFound desc = could not find container \"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913\": container with ID starting with 69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913 not found: ID does not exist"
Mar 19 09:22:17.644558 master-0 kubenswrapper[7385]: I0319 09:22:17.644514 7385 scope.go:117] "RemoveContainer" containerID="63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4"
Mar 19 09:22:17.644862 master-0 kubenswrapper[7385]: I0319 09:22:17.644832 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4"} err="failed to get container status \"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4\": rpc error: code = NotFound desc = could not find container \"63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4\": container with ID starting with
63c5a10b37f931e39d56bd461dcbe07b4c4417724507c6f4a84e3980e848a1e4 not found: ID does not exist" Mar 19 09:22:17.644862 master-0 kubenswrapper[7385]: I0319 09:22:17.644854 7385 scope.go:117] "RemoveContainer" containerID="69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913" Mar 19 09:22:17.645369 master-0 kubenswrapper[7385]: I0319 09:22:17.645343 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913"} err="failed to get container status \"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913\": rpc error: code = NotFound desc = could not find container \"69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913\": container with ID starting with 69121ee091864b2b22400e8ba1e51ab700da1a9a069a82fe09606eb81dbf8913 not found: ID does not exist" Mar 19 09:22:17.645369 master-0 kubenswrapper[7385]: I0319 09:22:17.645362 7385 scope.go:117] "RemoveContainer" containerID="9d93731d12325d249433b098d7525f5e58d5658366b38d62bbea3cb61acc6b4a" Mar 19 09:22:18.303717 master-0 kubenswrapper[7385]: I0319 09:22:18.303636 7385 scope.go:117] "RemoveContainer" containerID="7f838ca4a31c3116568dc15531f5add527e92f8f330422d67816ce564c3aab44" Mar 19 09:22:18.567916 master-0 kubenswrapper[7385]: W0319 09:22:18.567739 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0491730_604c_4a66_b827_458da88d262b.slice/crio-ea94bf8965f915667b084d40efeb4f5102c63b750c132e105898d2d86dfc6bcf WatchSource:0}: Error finding container ea94bf8965f915667b084d40efeb4f5102c63b750c132e105898d2d86dfc6bcf: Status 404 returned error can't find the container with id ea94bf8965f915667b084d40efeb4f5102c63b750c132e105898d2d86dfc6bcf Mar 19 09:22:18.576055 master-0 kubenswrapper[7385]: I0319 09:22:18.574781 7385 scope.go:117] "RemoveContainer" 
containerID="6fd6080616c60415dc0f2dfc5aa404bff94ac07e13ce0f4b9e4c32835968ac86" Mar 19 09:22:19.054617 master-0 kubenswrapper[7385]: I0319 09:22:19.052827 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" event={"ID":"cd1425b9-fcd1-4aba-899f-e110eebce626","Type":"ContainerStarted","Data":"e58b99f4da3ded2a286482407189e580812fbd5fde61313a0d8876d046001408"} Mar 19 09:22:19.077851 master-0 kubenswrapper[7385]: I0319 09:22:19.077621 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" event={"ID":"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2","Type":"ContainerStarted","Data":"05c047f1dd1f77466b4da70d7d89474989156a4dc7f05fb84cbb6a93b60f00f0"} Mar 19 09:22:19.084427 master-0 kubenswrapper[7385]: I0319 09:22:19.082192 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" event={"ID":"14438c84-72d3-4f45-88a4-fc7e80df5fb8","Type":"ContainerStarted","Data":"a12d5dd19e5050bee5169ed8458f81e08bf1c3b77b9e78e64ca9507b3147d613"} Mar 19 09:22:19.084427 master-0 kubenswrapper[7385]: I0319 09:22:19.083723 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerStarted","Data":"f9982a7fe2276ecf5bf8dd3bab737e593501425df536f9820a4bd04690b29d97"} Mar 19 09:22:19.090713 master-0 kubenswrapper[7385]: I0319 09:22:19.087361 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-887wl" event={"ID":"72756f50-c970-4ef6-b8ca-88e49f996a74","Type":"ContainerStarted","Data":"83740dd3b2371f1c2f87cb91c9ce70d31804fb806f7201939fdfefa35b3fbd84"} Mar 19 09:22:19.090713 master-0 kubenswrapper[7385]: I0319 09:22:19.089654 7385 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-server-nsnqt" event={"ID":"e0491730-604c-4a66-b827-458da88d262b","Type":"ContainerStarted","Data":"77a3087c73ddd599424b68133d79d976bbd3b7aa7138be0a614f398e67c3b113"} Mar 19 09:22:19.090713 master-0 kubenswrapper[7385]: I0319 09:22:19.089681 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nsnqt" event={"ID":"e0491730-604c-4a66-b827-458da88d262b","Type":"ContainerStarted","Data":"ea94bf8965f915667b084d40efeb4f5102c63b750c132e105898d2d86dfc6bcf"} Mar 19 09:22:19.094641 master-0 kubenswrapper[7385]: I0319 09:22:19.091433 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" event={"ID":"b42aee2f-bffc-4c43-bf20-16d9c67d216c","Type":"ContainerStarted","Data":"0f058a9ddec013369b0b7a99605e047ccdedfe7bc76933c416ebc4ce994018b3"} Mar 19 09:22:19.094641 master-0 kubenswrapper[7385]: I0319 09:22:19.093911 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brpbp" event={"ID":"e8a7e077-3f6c-4efb-9865-cf82480c5da1","Type":"ContainerStarted","Data":"5fddf80528eded5db90a5a83bb8c3ef48b97513cb9fb2edabfb6e5774bd7a4dc"} Mar 19 09:22:19.115528 master-0 kubenswrapper[7385]: I0319 09:22:19.114639 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l26xf" event={"ID":"d504cbc7-5c09-4712-9f7a-c41a6386ef79","Type":"ContainerStarted","Data":"419f8f2138b335bf2ff24f15ef8dc0bc95062c30db2e877001baa8ff122cf0b9"} Mar 19 09:22:19.122051 master-0 kubenswrapper[7385]: I0319 09:22:19.121982 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cczg" event={"ID":"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5","Type":"ContainerStarted","Data":"7317e2fe3007499f7bd9b22966e52c4d2f14f432eb7fd09f964544754c6d642d"} Mar 19 09:22:19.630421 master-0 kubenswrapper[7385]: I0319 
09:22:19.630243 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" podStartSLOduration=3.973354607 podStartE2EDuration="28.630215361s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:52.999386361 +0000 UTC m=+208.673816062" lastFinishedPulling="2026-03-19 09:22:17.656247115 +0000 UTC m=+233.330676816" observedRunningTime="2026-03-19 09:22:19.620878206 +0000 UTC m=+235.295307907" watchObservedRunningTime="2026-03-19 09:22:19.630215361 +0000 UTC m=+235.304645072" Mar 19 09:22:20.141919 master-0 kubenswrapper[7385]: I0319 09:22:20.141786 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerStarted","Data":"b22e2a66927978f11297a106e9cb42a300f20b0e282ee6bf95bd7f7244d1990b"} Mar 19 09:22:20.141919 master-0 kubenswrapper[7385]: I0319 09:22:20.141843 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerStarted","Data":"ccbf8c179749d131ecca685672edda794d3d9e56e155b18ba174f1ad15f4ce67"} Mar 19 09:22:20.141919 master-0 kubenswrapper[7385]: I0319 09:22:20.141853 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerStarted","Data":"774e8a3e480c092251698110fbb5b53d79965d955c1c4ce2867552029267208f"} Mar 19 09:22:20.146316 master-0 kubenswrapper[7385]: I0319 09:22:20.145725 7385 generic.go:334] "Generic (PLEG): container finished" podID="72756f50-c970-4ef6-b8ca-88e49f996a74" 
containerID="83740dd3b2371f1c2f87cb91c9ce70d31804fb806f7201939fdfefa35b3fbd84" exitCode=0 Mar 19 09:22:20.146316 master-0 kubenswrapper[7385]: I0319 09:22:20.145821 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-887wl" event={"ID":"72756f50-c970-4ef6-b8ca-88e49f996a74","Type":"ContainerDied","Data":"83740dd3b2371f1c2f87cb91c9ce70d31804fb806f7201939fdfefa35b3fbd84"} Mar 19 09:22:20.152258 master-0 kubenswrapper[7385]: I0319 09:22:20.150002 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" event={"ID":"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2","Type":"ContainerStarted","Data":"11c939b60a227283973184abab4a74f274bf3ad0ae2f5315dbbcb266dc260e1c"} Mar 19 09:22:20.152258 master-0 kubenswrapper[7385]: I0319 09:22:20.150036 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" event={"ID":"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2","Type":"ContainerStarted","Data":"80f53658430226970cbb3d6da28515a974dd26f2b6648393dad26f820a6b95cf"} Mar 19 09:22:20.152258 master-0 kubenswrapper[7385]: I0319 09:22:20.151798 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerStarted","Data":"5cb6c10ede1632045f4c6b7b809db52b73fe2590e0eca9bb5097244794291556"} Mar 19 09:22:20.153587 master-0 kubenswrapper[7385]: I0319 09:22:20.153502 7385 generic.go:334] "Generic (PLEG): container finished" podID="e8a7e077-3f6c-4efb-9865-cf82480c5da1" containerID="5fddf80528eded5db90a5a83bb8c3ef48b97513cb9fb2edabfb6e5774bd7a4dc" exitCode=0 Mar 19 09:22:20.153587 master-0 kubenswrapper[7385]: I0319 09:22:20.153574 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brpbp" 
event={"ID":"e8a7e077-3f6c-4efb-9865-cf82480c5da1","Type":"ContainerDied","Data":"5fddf80528eded5db90a5a83bb8c3ef48b97513cb9fb2edabfb6e5774bd7a4dc"} Mar 19 09:22:20.158630 master-0 kubenswrapper[7385]: I0319 09:22:20.158585 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" event={"ID":"0adaea87-67d0-41a7-a1f3-855fdd483aca","Type":"ContainerStarted","Data":"b8d8bb5c4f549f8fe91d978ba3f1e5e9db160e7dc83058688c2592f3f76d736c"} Mar 19 09:22:20.159005 master-0 kubenswrapper[7385]: I0319 09:22:20.158971 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:22:20.160636 master-0 kubenswrapper[7385]: I0319 09:22:20.160601 7385 generic.go:334] "Generic (PLEG): container finished" podID="d504cbc7-5c09-4712-9f7a-c41a6386ef79" containerID="419f8f2138b335bf2ff24f15ef8dc0bc95062c30db2e877001baa8ff122cf0b9" exitCode=0 Mar 19 09:22:20.160680 master-0 kubenswrapper[7385]: I0319 09:22:20.160661 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l26xf" event={"ID":"d504cbc7-5c09-4712-9f7a-c41a6386ef79","Type":"ContainerDied","Data":"419f8f2138b335bf2ff24f15ef8dc0bc95062c30db2e877001baa8ff122cf0b9"} Mar 19 09:22:20.162717 master-0 kubenswrapper[7385]: I0319 09:22:20.162645 7385 generic.go:334] "Generic (PLEG): container finished" podID="5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5" containerID="7317e2fe3007499f7bd9b22966e52c4d2f14f432eb7fd09f964544754c6d642d" exitCode=0 Mar 19 09:22:20.163002 master-0 kubenswrapper[7385]: I0319 09:22:20.162718 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cczg" event={"ID":"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5","Type":"ContainerDied","Data":"7317e2fe3007499f7bd9b22966e52c4d2f14f432eb7fd09f964544754c6d642d"} Mar 19 09:22:20.167813 master-0 kubenswrapper[7385]: I0319 
09:22:20.167769 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:22:20.733596 master-0 kubenswrapper[7385]: I0319 09:22:20.728506 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:22:20.733596 master-0 kubenswrapper[7385]: I0319 09:22:20.730734 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" podStartSLOduration=4.769453671 podStartE2EDuration="29.73068598s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:52.982287582 +0000 UTC m=+208.656717283" lastFinishedPulling="2026-03-19 09:22:17.943519861 +0000 UTC m=+233.617949592" observedRunningTime="2026-03-19 09:22:20.724661233 +0000 UTC m=+236.399090994" watchObservedRunningTime="2026-03-19 09:22:20.73068598 +0000 UTC m=+236.405115701" Mar 19 09:22:20.736597 master-0 kubenswrapper[7385]: I0319 09:22:20.735867 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:20.736597 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:20.736597 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:20.736597 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:20.736597 master-0 kubenswrapper[7385]: I0319 09:22:20.735960 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:21.731773 master-0 kubenswrapper[7385]: 
I0319 09:22:21.731647 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:21.731773 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:21.731773 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:21.731773 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:21.731773 master-0 kubenswrapper[7385]: I0319 09:22:21.731713 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:21.815893 master-0 kubenswrapper[7385]: I0319 09:22:21.815789 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nsnqt" podStartSLOduration=17.815760953 podStartE2EDuration="17.815760953s" podCreationTimestamp="2026-03-19 09:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:22:21.809203549 +0000 UTC m=+237.483633280" watchObservedRunningTime="2026-03-19 09:22:21.815760953 +0000 UTC m=+237.490190654" Mar 19 09:22:22.731718 master-0 kubenswrapper[7385]: I0319 09:22:22.731644 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:22.731718 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:22.731718 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:22.731718 master-0 kubenswrapper[7385]: healthz check 
failed Mar 19 09:22:22.732348 master-0 kubenswrapper[7385]: I0319 09:22:22.731743 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:23.055567 master-0 kubenswrapper[7385]: I0319 09:22:23.055129 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" podStartSLOduration=283.055110508 podStartE2EDuration="4m43.055110508s" podCreationTimestamp="2026-03-19 09:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:22:23.048075967 +0000 UTC m=+238.722505678" watchObservedRunningTime="2026-03-19 09:22:23.055110508 +0000 UTC m=+238.729540219" Mar 19 09:22:23.146234 master-0 kubenswrapper[7385]: I0319 09:22:23.146159 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" podStartSLOduration=20.603700072 podStartE2EDuration="32.146137417s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:22:07.037055057 +0000 UTC m=+222.711484798" lastFinishedPulling="2026-03-19 09:22:18.579492442 +0000 UTC m=+234.253922143" observedRunningTime="2026-03-19 09:22:23.141369451 +0000 UTC m=+238.815799172" watchObservedRunningTime="2026-03-19 09:22:23.146137417 +0000 UTC m=+238.820567128" Mar 19 09:22:23.198142 master-0 kubenswrapper[7385]: I0319 09:22:23.197945 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" podStartSLOduration=8.197926293 podStartE2EDuration="8.197926293s" podCreationTimestamp="2026-03-19 09:22:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:22:23.191220693 +0000 UTC m=+238.865650394" watchObservedRunningTime="2026-03-19 09:22:23.197926293 +0000 UTC m=+238.872355994" Mar 19 09:22:23.246810 master-0 kubenswrapper[7385]: I0319 09:22:23.246258 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" podStartSLOduration=23.246238035 podStartE2EDuration="23.246238035s" podCreationTimestamp="2026-03-19 09:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:22:23.242061297 +0000 UTC m=+238.916491018" watchObservedRunningTime="2026-03-19 09:22:23.246238035 +0000 UTC m=+238.920667736" Mar 19 09:22:23.252633 master-0 kubenswrapper[7385]: I0319 09:22:23.247647 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podStartSLOduration=13.389715826 podStartE2EDuration="32.2476408s" podCreationTimestamp="2026-03-19 09:21:51 +0000 UTC" firstStartedPulling="2026-03-19 09:21:59.787453084 +0000 UTC m=+215.461882785" lastFinishedPulling="2026-03-19 09:22:18.645378058 +0000 UTC m=+234.319807759" observedRunningTime="2026-03-19 09:22:23.222958853 +0000 UTC m=+238.897388574" watchObservedRunningTime="2026-03-19 09:22:23.2476408 +0000 UTC m=+238.922070501" Mar 19 09:22:23.306402 master-0 kubenswrapper[7385]: I0319 09:22:23.306366 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"] Mar 19 09:22:23.311662 master-0 kubenswrapper[7385]: I0319 09:22:23.307315 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.312496 master-0 kubenswrapper[7385]: I0319 09:22:23.312421 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 19 09:22:23.312496 master-0 kubenswrapper[7385]: I0319 09:22:23.312447 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 19 09:22:23.312496 master-0 kubenswrapper[7385]: I0319 09:22:23.312458 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 19 09:22:23.312690 master-0 kubenswrapper[7385]: I0319 09:22:23.312559 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-45rfb" Mar 19 09:22:23.325091 master-0 kubenswrapper[7385]: I0319 09:22:23.324614 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"] Mar 19 09:22:23.383201 master-0 kubenswrapper[7385]: I0319 09:22:23.383142 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.383201 master-0 kubenswrapper[7385]: I0319 09:22:23.383197 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w48g\" (UniqueName: \"kubernetes.io/projected/3f81774a-22a4-4335-961b-04e53e0f3b5e-kube-api-access-2w48g\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 
09:22:23.383478 master-0 kubenswrapper[7385]: I0319 09:22:23.383233 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.383478 master-0 kubenswrapper[7385]: I0319 09:22:23.383267 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.484054 master-0 kubenswrapper[7385]: I0319 09:22:23.484002 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.484054 master-0 kubenswrapper[7385]: I0319 09:22:23.484062 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w48g\" (UniqueName: \"kubernetes.io/projected/3f81774a-22a4-4335-961b-04e53e0f3b5e-kube-api-access-2w48g\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.484340 master-0 kubenswrapper[7385]: I0319 09:22:23.484293 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.484484 master-0 kubenswrapper[7385]: I0319 09:22:23.484450 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.489563 master-0 kubenswrapper[7385]: I0319 09:22:23.485139 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.498117 master-0 kubenswrapper[7385]: I0319 09:22:23.498039 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.498256 master-0 kubenswrapper[7385]: I0319 09:22:23.498227 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: 
\"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.503558 master-0 kubenswrapper[7385]: I0319 09:22:23.503232 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w48g\" (UniqueName: \"kubernetes.io/projected/3f81774a-22a4-4335-961b-04e53e0f3b5e-kube-api-access-2w48g\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.661913 master-0 kubenswrapper[7385]: I0319 09:22:23.661825 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:22:23.731637 master-0 kubenswrapper[7385]: I0319 09:22:23.731576 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:23.731637 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:23.731637 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:23.731637 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:23.731637 master-0 kubenswrapper[7385]: I0319 09:22:23.731638 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:24.188154 master-0 kubenswrapper[7385]: I0319 09:22:24.188080 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l26xf" event={"ID":"d504cbc7-5c09-4712-9f7a-c41a6386ef79","Type":"ContainerStarted","Data":"d6fcdada8ddd56d09c809855397b5ce85d652c8a5dbe626c939314d2ec6b7e50"} Mar 19 
09:22:24.190648 master-0 kubenswrapper[7385]: I0319 09:22:24.190615 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7cczg" event={"ID":"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5","Type":"ContainerStarted","Data":"a707abcd1f2b27ef3e6efad03096493eb68e56ef027a4e47aab24592519bee33"} Mar 19 09:22:24.192933 master-0 kubenswrapper[7385]: I0319 09:22:24.192878 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-887wl" event={"ID":"72756f50-c970-4ef6-b8ca-88e49f996a74","Type":"ContainerStarted","Data":"3d685a741f8c7a95447974b0702f75af2cedd304bd934e7dff9f16b7d5bd0bf0"} Mar 19 09:22:24.194515 master-0 kubenswrapper[7385]: I0319 09:22:24.194480 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brpbp" event={"ID":"e8a7e077-3f6c-4efb-9865-cf82480c5da1","Type":"ContainerStarted","Data":"bf9f705551895cee23f5e2cd3f0a16e8bbc1e2545c5613363013d4aa228de451"} Mar 19 09:22:24.731511 master-0 kubenswrapper[7385]: I0319 09:22:24.731470 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:24.731511 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:24.731511 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:24.731511 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:24.731903 master-0 kubenswrapper[7385]: I0319 09:22:24.731872 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:24.884323 master-0 kubenswrapper[7385]: I0319 09:22:24.882023 7385 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l26xf" podStartSLOduration=16.859225433 podStartE2EDuration="48.882008218s" podCreationTimestamp="2026-03-19 09:21:36 +0000 UTC" firstStartedPulling="2026-03-19 09:21:51.062783439 +0000 UTC m=+206.737213140" lastFinishedPulling="2026-03-19 09:22:23.085566214 +0000 UTC m=+238.759995925" observedRunningTime="2026-03-19 09:22:24.87659056 +0000 UTC m=+240.551020271" watchObservedRunningTime="2026-03-19 09:22:24.882008218 +0000 UTC m=+240.556437919" Mar 19 09:22:24.884323 master-0 kubenswrapper[7385]: I0319 09:22:24.883069 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"] Mar 19 09:22:25.200908 master-0 kubenswrapper[7385]: I0319 09:22:25.200781 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" event={"ID":"3f81774a-22a4-4335-961b-04e53e0f3b5e","Type":"ContainerStarted","Data":"172085267d003a11af66385fae45641af5f2ea573dfe38357436fa95e4bfc2cb"} Mar 19 09:22:25.729022 master-0 kubenswrapper[7385]: I0319 09:22:25.728922 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7cczg" podStartSLOduration=9.804669526 podStartE2EDuration="41.728900184s" podCreationTimestamp="2026-03-19 09:21:44 +0000 UTC" firstStartedPulling="2026-03-19 09:21:51.06585811 +0000 UTC m=+206.740287831" lastFinishedPulling="2026-03-19 09:22:22.990088768 +0000 UTC m=+238.664518489" observedRunningTime="2026-03-19 09:22:25.723458646 +0000 UTC m=+241.397888387" watchObservedRunningTime="2026-03-19 09:22:25.728900184 +0000 UTC m=+241.403329895" Mar 19 09:22:25.733349 master-0 kubenswrapper[7385]: I0319 09:22:25.733274 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:25.733349 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:25.733349 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:25.733349 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:25.733719 master-0 kubenswrapper[7385]: I0319 09:22:25.733371 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:26.693516 master-0 kubenswrapper[7385]: I0319 09:22:26.684722 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brpbp" podStartSLOduration=17.749234668 podStartE2EDuration="49.684703185s" podCreationTimestamp="2026-03-19 09:21:37 +0000 UTC" firstStartedPulling="2026-03-19 09:21:51.056235275 +0000 UTC m=+206.730664996" lastFinishedPulling="2026-03-19 09:22:22.991703802 +0000 UTC m=+238.666133513" observedRunningTime="2026-03-19 09:22:26.21777949 +0000 UTC m=+241.892209211" watchObservedRunningTime="2026-03-19 09:22:26.684703185 +0000 UTC m=+242.359132896" Mar 19 09:22:26.715101 master-0 kubenswrapper[7385]: I0319 09:22:26.715015 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-887wl" podStartSLOduration=18.775562261 podStartE2EDuration="50.714994677s" podCreationTimestamp="2026-03-19 09:21:36 +0000 UTC" firstStartedPulling="2026-03-19 09:21:51.051675286 +0000 UTC m=+206.726104987" lastFinishedPulling="2026-03-19 09:22:22.991107692 +0000 UTC m=+238.665537403" observedRunningTime="2026-03-19 09:22:26.686300948 +0000 UTC m=+242.360730659" watchObservedRunningTime="2026-03-19 09:22:26.714994677 +0000 UTC m=+242.389424378" Mar 19 09:22:26.734366 master-0 kubenswrapper[7385]: I0319 09:22:26.734274 
7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:26.734366 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:26.734366 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:26.734366 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:26.735905 master-0 kubenswrapper[7385]: I0319 09:22:26.734372 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:27.731127 master-0 kubenswrapper[7385]: I0319 09:22:27.731069 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:27.731127 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:27.731127 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:27.731127 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:27.731757 master-0 kubenswrapper[7385]: I0319 09:22:27.731142 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:28.218450 master-0 kubenswrapper[7385]: I0319 09:22:28.218398 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" 
event={"ID":"3f81774a-22a4-4335-961b-04e53e0f3b5e","Type":"ContainerStarted","Data":"40beff53e25b3c4544aee82ab5fcc802983bcd5fb5978dbd3164535d4e3d2f6e"} Mar 19 09:22:28.218450 master-0 kubenswrapper[7385]: I0319 09:22:28.218451 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" event={"ID":"3f81774a-22a4-4335-961b-04e53e0f3b5e","Type":"ContainerStarted","Data":"0ae67213675cc5948072e9210d4fd2398b1d5728be96f5e3d34bfe623cb75520"} Mar 19 09:22:28.388472 master-0 kubenswrapper[7385]: I0319 09:22:28.388383 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" podStartSLOduration=2.998328415 podStartE2EDuration="5.388357061s" podCreationTimestamp="2026-03-19 09:22:23 +0000 UTC" firstStartedPulling="2026-03-19 09:22:24.910669786 +0000 UTC m=+240.585099487" lastFinishedPulling="2026-03-19 09:22:27.300698412 +0000 UTC m=+242.975128133" observedRunningTime="2026-03-19 09:22:28.384936569 +0000 UTC m=+244.059366290" watchObservedRunningTime="2026-03-19 09:22:28.388357061 +0000 UTC m=+244.062786792" Mar 19 09:22:28.730474 master-0 kubenswrapper[7385]: I0319 09:22:28.730413 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:28.730474 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:28.730474 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:28.730474 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:28.730743 master-0 kubenswrapper[7385]: I0319 09:22:28.730491 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 19 09:22:29.728796 master-0 kubenswrapper[7385]: I0319 09:22:29.728737 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:22:29.730588 master-0 kubenswrapper[7385]: I0319 09:22:29.730527 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:29.730588 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:29.730588 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:29.730588 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:29.730864 master-0 kubenswrapper[7385]: I0319 09:22:29.730605 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:29.994714 master-0 kubenswrapper[7385]: I0319 09:22:29.994574 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:22:29.994714 master-0 kubenswrapper[7385]: I0319 09:22:29.994629 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:22:30.009579 master-0 kubenswrapper[7385]: I0319 09:22:30.009515 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:22:30.009917 master-0 kubenswrapper[7385]: I0319 09:22:30.009877 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:22:30.027620 master-0 kubenswrapper[7385]: 
I0319 09:22:30.027566 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:22:30.027821 master-0 kubenswrapper[7385]: I0319 09:22:30.027641 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:22:30.045908 master-0 kubenswrapper[7385]: I0319 09:22:30.045849 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:22:30.047224 master-0 kubenswrapper[7385]: I0319 09:22:30.046743 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:22:30.047224 master-0 kubenswrapper[7385]: I0319 09:22:30.046786 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:22:30.047499 master-0 kubenswrapper[7385]: I0319 09:22:30.047405 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:22:30.071191 master-0 kubenswrapper[7385]: I0319 09:22:30.067967 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:22:30.092025 master-0 kubenswrapper[7385]: I0319 09:22:30.091966 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:22:30.277012 master-0 kubenswrapper[7385]: I0319 09:22:30.276644 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:22:30.278112 master-0 kubenswrapper[7385]: I0319 09:22:30.278030 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:22:30.285235 master-0 
kubenswrapper[7385]: I0319 09:22:30.285177 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:22:30.288704 master-0 kubenswrapper[7385]: I0319 09:22:30.288678 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:22:30.634829 master-0 kubenswrapper[7385]: I0319 09:22:30.634776 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-k6kn8"] Mar 19 09:22:30.635931 master-0 kubenswrapper[7385]: I0319 09:22:30.635905 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.637785 master-0 kubenswrapper[7385]: I0319 09:22:30.637638 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-sx4vp" Mar 19 09:22:30.642639 master-0 kubenswrapper[7385]: I0319 09:22:30.638525 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 19 09:22:30.642639 master-0 kubenswrapper[7385]: I0319 09:22:30.638909 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 19 09:22:30.647528 master-0 kubenswrapper[7385]: I0319 09:22:30.645926 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"] Mar 19 09:22:30.648144 master-0 kubenswrapper[7385]: I0319 09:22:30.648017 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.650196 master-0 kubenswrapper[7385]: I0319 09:22:30.650181 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 09:22:30.650513 master-0 kubenswrapper[7385]: I0319 09:22:30.650218 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 19 09:22:30.655664 master-0 kubenswrapper[7385]: I0319 09:22:30.654134 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-qxk5n" Mar 19 09:22:30.666184 master-0 kubenswrapper[7385]: I0319 09:22:30.664589 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"] Mar 19 09:22:30.679321 master-0 kubenswrapper[7385]: I0319 09:22:30.679264 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"] Mar 19 09:22:30.681505 master-0 kubenswrapper[7385]: I0319 09:22:30.680858 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.684998 master-0 kubenswrapper[7385]: I0319 09:22:30.684946 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 09:22:30.685160 master-0 kubenswrapper[7385]: I0319 09:22:30.685145 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 19 09:22:30.685310 master-0 kubenswrapper[7385]: I0319 09:22:30.685285 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 19 09:22:30.687147 master-0 kubenswrapper[7385]: I0319 09:22:30.686836 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-gmb5f" Mar 19 09:22:30.690133 master-0 kubenswrapper[7385]: I0319 09:22:30.689928 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"] Mar 19 09:22:30.699456 master-0 kubenswrapper[7385]: I0319 09:22:30.699364 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-textfile\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.699456 master-0 kubenswrapper[7385]: I0319 09:22:30.699425 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssdjz\" (UniqueName: \"kubernetes.io/projected/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-kube-api-access-ssdjz\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.699611 master-0 
kubenswrapper[7385]: I0319 09:22:30.699482 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.699611 master-0 kubenswrapper[7385]: I0319 09:22:30.699511 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-wtmp\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.699611 master-0 kubenswrapper[7385]: I0319 09:22:30.699579 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-api-access-nfmmt\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.699705 master-0 kubenswrapper[7385]: I0319 09:22:30.699622 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.699705 master-0 kubenswrapper[7385]: I0319 09:22:30.699665 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxz2j\" 
(UniqueName: \"kubernetes.io/projected/1b230b9d-529c-4b28-bc73-659a28d7961a-kube-api-access-mxz2j\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.699705 master-0 kubenswrapper[7385]: I0319 09:22:30.699697 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.699792 master-0 kubenswrapper[7385]: I0319 09:22:30.699728 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.699792 master-0 kubenswrapper[7385]: I0319 09:22:30.699755 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.699792 master-0 kubenswrapper[7385]: I0319 09:22:30.699778 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca\") pod \"node-exporter-k6kn8\" (UID: 
\"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.699876 master-0 kubenswrapper[7385]: I0319 09:22:30.699804 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.699912 master-0 kubenswrapper[7385]: I0319 09:22:30.699874 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-sys\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.699912 master-0 kubenswrapper[7385]: I0319 09:22:30.699901 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.699983 master-0 kubenswrapper[7385]: I0319 09:22:30.699938 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.700018 master-0 kubenswrapper[7385]: I0319 09:22:30.699983 7385 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.700053 master-0 kubenswrapper[7385]: I0319 09:22:30.700014 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-root\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.700053 master-0 kubenswrapper[7385]: I0319 09:22:30.700037 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3883b232-5772-460f-9e94-b4cbc7b7e638-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.739118 master-0 kubenswrapper[7385]: I0319 09:22:30.738266 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:30.739118 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:30.739118 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:30.739118 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:30.739118 master-0 kubenswrapper[7385]: I0319 09:22:30.738320 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:30.800403 master-0 kubenswrapper[7385]: I0319 09:22:30.800343 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.800403 master-0 kubenswrapper[7385]: I0319 09:22:30.800384 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3883b232-5772-460f-9e94-b4cbc7b7e638-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.800403 master-0 kubenswrapper[7385]: I0319 09:22:30.800403 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-root\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801087 master-0 kubenswrapper[7385]: I0319 09:22:30.801055 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-textfile\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801087 master-0 kubenswrapper[7385]: I0319 09:22:30.801074 7385 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-root\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801218 master-0 kubenswrapper[7385]: I0319 09:22:30.801105 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssdjz\" (UniqueName: \"kubernetes.io/projected/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-kube-api-access-ssdjz\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801218 master-0 kubenswrapper[7385]: I0319 09:22:30.801144 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.801218 master-0 kubenswrapper[7385]: I0319 09:22:30.801171 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-wtmp\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801218 master-0 kubenswrapper[7385]: I0319 09:22:30.801202 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-api-access-nfmmt\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 
09:22:30.801218 master-0 kubenswrapper[7385]: I0319 09:22:30.801221 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801247 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxz2j\" (UniqueName: \"kubernetes.io/projected/1b230b9d-529c-4b28-bc73-659a28d7961a-kube-api-access-mxz2j\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801274 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801305 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801329 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801352 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801380 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801430 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-sys\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801453 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: 
I0319 09:22:30.801464 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3883b232-5772-460f-9e94-b4cbc7b7e638-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.801447 master-0 kubenswrapper[7385]: I0319 09:22:30.801490 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: E0319 09:22:30.802141 7385 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: E0319 09:22:30.802237 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls podName:1b230b9d-529c-4b28-bc73-659a28d7961a nodeName:}" failed. No retries permitted until 2026-03-19 09:22:31.302214518 +0000 UTC m=+246.976644319 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-84ztr" (UID: "1b230b9d-529c-4b28-bc73-659a28d7961a") : secret "openshift-state-metrics-tls" not found Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: I0319 09:22:30.802326 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-wtmp\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: I0319 09:22:30.802156 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: I0319 09:22:30.803006 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-textfile\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: I0319 09:22:30.803415 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.805278 
master-0 kubenswrapper[7385]: I0319 09:22:30.803875 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-sys\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: I0319 09:22:30.804028 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: I0319 09:22:30.804307 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.805278 master-0 kubenswrapper[7385]: I0319 09:22:30.804578 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.817015 master-0 kubenswrapper[7385]: I0319 09:22:30.806017 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.817015 master-0 kubenswrapper[7385]: I0319 09:22:30.806263 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.822605 master-0 kubenswrapper[7385]: I0319 09:22:30.820018 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.822605 master-0 kubenswrapper[7385]: I0319 09:22:30.820479 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.822605 master-0 kubenswrapper[7385]: I0319 09:22:30.821621 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxz2j\" (UniqueName: \"kubernetes.io/projected/1b230b9d-529c-4b28-bc73-659a28d7961a-kube-api-access-mxz2j\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:30.822605 master-0 kubenswrapper[7385]: I0319 09:22:30.822033 7385 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-api-access-nfmmt\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:30.822605 master-0 kubenswrapper[7385]: I0319 09:22:30.822193 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssdjz\" (UniqueName: \"kubernetes.io/projected/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-kube-api-access-ssdjz\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.961815 master-0 kubenswrapper[7385]: I0319 09:22:30.961702 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:22:30.976981 master-0 kubenswrapper[7385]: W0319 09:22:30.976298 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e46a34_8a00_4bb3_869b_8a5911ef6cf8.slice/crio-8ee3b1585121acd28cac002efd25a4951438f7aba1490780501fdecb04a7dd12 WatchSource:0}: Error finding container 8ee3b1585121acd28cac002efd25a4951438f7aba1490780501fdecb04a7dd12: Status 404 returned error can't find the container with id 8ee3b1585121acd28cac002efd25a4951438f7aba1490780501fdecb04a7dd12 Mar 19 09:22:31.006972 master-0 kubenswrapper[7385]: I0319 09:22:31.006931 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:22:31.237458 master-0 kubenswrapper[7385]: I0319 09:22:31.237330 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k6kn8" event={"ID":"31e46a34-8a00-4bb3-869b-8a5911ef6cf8","Type":"ContainerStarted","Data":"8ee3b1585121acd28cac002efd25a4951438f7aba1490780501fdecb04a7dd12"} Mar 19 09:22:31.310729 master-0 kubenswrapper[7385]: I0319 09:22:31.310676 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:31.314377 master-0 kubenswrapper[7385]: I0319 09:22:31.314339 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:31.465680 master-0 kubenswrapper[7385]: I0319 09:22:31.465640 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"] Mar 19 09:22:31.573579 master-0 kubenswrapper[7385]: I0319 09:22:31.573431 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:22:31.733490 master-0 kubenswrapper[7385]: I0319 09:22:31.733386 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:31.733490 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:31.733490 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:31.733490 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:31.733718 master-0 kubenswrapper[7385]: I0319 09:22:31.733536 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:32.243104 master-0 kubenswrapper[7385]: I0319 09:22:32.243040 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" event={"ID":"3883b232-5772-460f-9e94-b4cbc7b7e638","Type":"ContainerStarted","Data":"3cd6d09fe73a460b498f00d76bd556cdb55771a774477420bab191c7dcd68863"} Mar 19 09:22:32.462097 master-0 kubenswrapper[7385]: I0319 09:22:32.462049 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"] Mar 19 09:22:32.469108 master-0 kubenswrapper[7385]: W0319 09:22:32.469058 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b230b9d_529c_4b28_bc73_659a28d7961a.slice/crio-6937b999e172420380651c53fc5e6680d5943c027cccaefd6221f5dee41afb2c WatchSource:0}: Error finding container 6937b999e172420380651c53fc5e6680d5943c027cccaefd6221f5dee41afb2c: Status 404 returned error can't find the 
container with id 6937b999e172420380651c53fc5e6680d5943c027cccaefd6221f5dee41afb2c Mar 19 09:22:32.738586 master-0 kubenswrapper[7385]: I0319 09:22:32.737986 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:32.738586 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:32.738586 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:32.738586 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:32.740508 master-0 kubenswrapper[7385]: I0319 09:22:32.740436 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:33.257007 master-0 kubenswrapper[7385]: I0319 09:22:33.256942 7385 generic.go:334] "Generic (PLEG): container finished" podID="70e8c62b-97c3-4c0c-85d3-f660118831fd" containerID="07f85a8394cfe2927824d6dd40beca1cf31136db472d1b09c7b6f5f1e6dae94f" exitCode=0 Mar 19 09:22:33.257617 master-0 kubenswrapper[7385]: I0319 09:22:33.257073 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerDied","Data":"07f85a8394cfe2927824d6dd40beca1cf31136db472d1b09c7b6f5f1e6dae94f"} Mar 19 09:22:33.257617 master-0 kubenswrapper[7385]: I0319 09:22:33.257179 7385 scope.go:117] "RemoveContainer" containerID="13eaf9fb6b5973dc7a39cf4a595a1daae2d0c0b608e70d2c41f378466d42eb35" Mar 19 09:22:33.258041 master-0 kubenswrapper[7385]: I0319 09:22:33.257992 7385 scope.go:117] "RemoveContainer" containerID="07f85a8394cfe2927824d6dd40beca1cf31136db472d1b09c7b6f5f1e6dae94f" Mar 19 09:22:33.259137 master-0 
kubenswrapper[7385]: E0319 09:22:33.258437 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-h4zrl_openshift-insights(70e8c62b-97c3-4c0c-85d3-f660118831fd)\"" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" podUID="70e8c62b-97c3-4c0c-85d3-f660118831fd" Mar 19 09:22:33.260365 master-0 kubenswrapper[7385]: I0319 09:22:33.260322 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" event={"ID":"1b230b9d-529c-4b28-bc73-659a28d7961a","Type":"ContainerStarted","Data":"8f9d780a62629698d3f39b06519406d138cb6fe6eb63eb8ec74c9bd75acc2af5"} Mar 19 09:22:33.260416 master-0 kubenswrapper[7385]: I0319 09:22:33.260387 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" event={"ID":"1b230b9d-529c-4b28-bc73-659a28d7961a","Type":"ContainerStarted","Data":"6937b999e172420380651c53fc5e6680d5943c027cccaefd6221f5dee41afb2c"} Mar 19 09:22:33.730948 master-0 kubenswrapper[7385]: I0319 09:22:33.730852 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:33.730948 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:33.730948 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:33.730948 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:33.730948 master-0 kubenswrapper[7385]: I0319 09:22:33.730916 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 19 09:22:34.268851 master-0 kubenswrapper[7385]: I0319 09:22:34.268787 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" event={"ID":"1b230b9d-529c-4b28-bc73-659a28d7961a","Type":"ContainerStarted","Data":"ed4ead022d87dc9c30aa684036b51cb9207f1793f28ffb4ef63a8745591d019e"} Mar 19 09:22:34.733043 master-0 kubenswrapper[7385]: I0319 09:22:34.730527 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:34.733043 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:34.733043 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:34.733043 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:34.733043 master-0 kubenswrapper[7385]: I0319 09:22:34.730608 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:35.732226 master-0 kubenswrapper[7385]: I0319 09:22:35.732117 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:35.732226 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:35.732226 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:35.732226 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:35.733224 master-0 kubenswrapper[7385]: I0319 09:22:35.732235 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:36.731568 master-0 kubenswrapper[7385]: I0319 09:22:36.731505 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:36.731568 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:36.731568 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:36.731568 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:36.731929 master-0 kubenswrapper[7385]: I0319 09:22:36.731598 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:37.730465 master-0 kubenswrapper[7385]: I0319 09:22:37.730415 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:37.730465 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:37.730465 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:37.730465 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:37.731135 master-0 kubenswrapper[7385]: I0319 09:22:37.730479 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:38.730804 
master-0 kubenswrapper[7385]: I0319 09:22:38.730761 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:38.730804 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:38.730804 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:38.730804 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:38.731572 master-0 kubenswrapper[7385]: I0319 09:22:38.730821 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:39.222582 master-0 kubenswrapper[7385]: I0319 09:22:39.222488 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7c64897fc5-qj6vj"] Mar 19 09:22:39.224076 master-0 kubenswrapper[7385]: I0319 09:22:39.223720 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.229219 master-0 kubenswrapper[7385]: I0319 09:22:39.229071 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 19 09:22:39.229219 master-0 kubenswrapper[7385]: I0319 09:22:39.229101 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 19 09:22:39.229219 master-0 kubenswrapper[7385]: I0319 09:22:39.229209 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4tu9qkfhfujlu" Mar 19 09:22:39.229425 master-0 kubenswrapper[7385]: I0319 09:22:39.229322 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dq4bt" Mar 19 09:22:39.229425 master-0 kubenswrapper[7385]: I0319 09:22:39.229390 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 19 09:22:39.229506 master-0 kubenswrapper[7385]: I0319 09:22:39.229429 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 19 09:22:39.239485 master-0 kubenswrapper[7385]: I0319 09:22:39.237213 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.239485 master-0 kubenswrapper[7385]: I0319 09:22:39.237262 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.239485 master-0 kubenswrapper[7385]: I0319 09:22:39.237333 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4jnj\" (UniqueName: \"kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.239485 master-0 kubenswrapper[7385]: I0319 09:22:39.237364 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.239485 master-0 kubenswrapper[7385]: I0319 09:22:39.237409 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.239485 master-0 kubenswrapper[7385]: I0319 09:22:39.237438 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " 
pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.239485 master-0 kubenswrapper[7385]: I0319 09:22:39.237459 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.248012 master-0 kubenswrapper[7385]: I0319 09:22:39.247916 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7c64897fc5-qj6vj"] Mar 19 09:22:39.298509 master-0 kubenswrapper[7385]: I0319 09:22:39.298456 7385 generic.go:334] "Generic (PLEG): container finished" podID="31e46a34-8a00-4bb3-869b-8a5911ef6cf8" containerID="cb34860bbc4a03b9ef51077399d9cc73004aea17ddd2a3769650b04afea52e7b" exitCode=0 Mar 19 09:22:39.298745 master-0 kubenswrapper[7385]: I0319 09:22:39.298534 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k6kn8" event={"ID":"31e46a34-8a00-4bb3-869b-8a5911ef6cf8","Type":"ContainerDied","Data":"cb34860bbc4a03b9ef51077399d9cc73004aea17ddd2a3769650b04afea52e7b"} Mar 19 09:22:39.309230 master-0 kubenswrapper[7385]: I0319 09:22:39.309181 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" event={"ID":"3883b232-5772-460f-9e94-b4cbc7b7e638","Type":"ContainerStarted","Data":"a451c7dd403a39c27872e4fc5ef8f0c982cedf45ca97382b489479219d522de5"} Mar 19 09:22:39.309230 master-0 kubenswrapper[7385]: I0319 09:22:39.309224 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" event={"ID":"3883b232-5772-460f-9e94-b4cbc7b7e638","Type":"ContainerStarted","Data":"938c05934bfe3e7ecbcc20fd04527c13e411442dfcdd9b7d6a0cf585c0a76947"} Mar 19 
09:22:39.309230 master-0 kubenswrapper[7385]: I0319 09:22:39.309233 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" event={"ID":"3883b232-5772-460f-9e94-b4cbc7b7e638","Type":"ContainerStarted","Data":"9f920ab1fe22e22bd1c13721596a6cd3c14ac91fcc865e1551cf8602ccb78472"} Mar 19 09:22:39.338511 master-0 kubenswrapper[7385]: I0319 09:22:39.338458 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.338511 master-0 kubenswrapper[7385]: I0319 09:22:39.338504 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.338779 master-0 kubenswrapper[7385]: I0319 09:22:39.338633 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jnj\" (UniqueName: \"kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.338779 master-0 kubenswrapper[7385]: I0319 09:22:39.338671 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: 
\"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.338779 master-0 kubenswrapper[7385]: I0319 09:22:39.338758 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.338917 master-0 kubenswrapper[7385]: I0319 09:22:39.338778 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.338917 master-0 kubenswrapper[7385]: I0319 09:22:39.338811 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.339988 master-0 kubenswrapper[7385]: I0319 09:22:39.339955 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.340365 master-0 kubenswrapper[7385]: I0319 09:22:39.340332 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.341325 master-0 kubenswrapper[7385]: I0319 09:22:39.341293 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.343871 master-0 kubenswrapper[7385]: I0319 09:22:39.343814 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.345022 master-0 kubenswrapper[7385]: I0319 09:22:39.344988 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.347380 master-0 kubenswrapper[7385]: I0319 09:22:39.346883 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.361084 
master-0 kubenswrapper[7385]: I0319 09:22:39.361018 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" podStartSLOduration=2.077419965 podStartE2EDuration="9.361000521s" podCreationTimestamp="2026-03-19 09:22:30 +0000 UTC" firstStartedPulling="2026-03-19 09:22:31.473377921 +0000 UTC m=+247.147807662" lastFinishedPulling="2026-03-19 09:22:38.756958517 +0000 UTC m=+254.431388218" observedRunningTime="2026-03-19 09:22:39.359973948 +0000 UTC m=+255.034403649" watchObservedRunningTime="2026-03-19 09:22:39.361000521 +0000 UTC m=+255.035430212" Mar 19 09:22:39.374423 master-0 kubenswrapper[7385]: I0319 09:22:39.374007 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jnj\" (UniqueName: \"kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.562608 master-0 kubenswrapper[7385]: I0319 09:22:39.559162 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:39.733569 master-0 kubenswrapper[7385]: I0319 09:22:39.733500 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:39.733569 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:39.733569 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:39.733569 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:39.734121 master-0 kubenswrapper[7385]: I0319 09:22:39.733595 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:40.106481 master-0 kubenswrapper[7385]: I0319 09:22:40.106438 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7c64897fc5-qj6vj"] Mar 19 09:22:40.112118 master-0 kubenswrapper[7385]: W0319 09:22:40.112070 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ae3c935_4beb_4cc9_ba91_d82cac3148dd.slice/crio-2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4 WatchSource:0}: Error finding container 2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4: Status 404 returned error can't find the container with id 2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4 Mar 19 09:22:40.315763 master-0 kubenswrapper[7385]: I0319 09:22:40.315702 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" 
event={"ID":"1b230b9d-529c-4b28-bc73-659a28d7961a","Type":"ContainerStarted","Data":"f3f79ec3a7abc38790af2277cd8228c57b321c050fe8a7ce304a43ab1068c858"} Mar 19 09:22:40.317554 master-0 kubenswrapper[7385]: I0319 09:22:40.317509 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" event={"ID":"5ae3c935-4beb-4cc9-ba91-d82cac3148dd","Type":"ContainerStarted","Data":"2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4"} Mar 19 09:22:40.319709 master-0 kubenswrapper[7385]: I0319 09:22:40.319672 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k6kn8" event={"ID":"31e46a34-8a00-4bb3-869b-8a5911ef6cf8","Type":"ContainerStarted","Data":"1f956bfb7bedd24701c88aaa7f740ca983956f68bccdd185cf935490d03f7ea7"} Mar 19 09:22:40.319709 master-0 kubenswrapper[7385]: I0319 09:22:40.319709 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-k6kn8" event={"ID":"31e46a34-8a00-4bb3-869b-8a5911ef6cf8","Type":"ContainerStarted","Data":"6e6707dceed67faf2e08be04f50829637152a463fa7f84bd40cdb5a74aa6f9b9"} Mar 19 09:22:40.332237 master-0 kubenswrapper[7385]: I0319 09:22:40.332159 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" podStartSLOduration=4.941378629 podStartE2EDuration="10.332141866s" podCreationTimestamp="2026-03-19 09:22:30 +0000 UTC" firstStartedPulling="2026-03-19 09:22:34.354260997 +0000 UTC m=+250.028690698" lastFinishedPulling="2026-03-19 09:22:39.745024234 +0000 UTC m=+255.419453935" observedRunningTime="2026-03-19 09:22:40.331206715 +0000 UTC m=+256.005636426" watchObservedRunningTime="2026-03-19 09:22:40.332141866 +0000 UTC m=+256.006571567" Mar 19 09:22:40.351053 master-0 kubenswrapper[7385]: I0319 09:22:40.350982 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/node-exporter-k6kn8" podStartSLOduration=2.60899088 podStartE2EDuration="10.350966272s" podCreationTimestamp="2026-03-19 09:22:30 +0000 UTC" firstStartedPulling="2026-03-19 09:22:30.978241371 +0000 UTC m=+246.652671082" lastFinishedPulling="2026-03-19 09:22:38.720216773 +0000 UTC m=+254.394646474" observedRunningTime="2026-03-19 09:22:40.347607942 +0000 UTC m=+256.022037643" watchObservedRunningTime="2026-03-19 09:22:40.350966272 +0000 UTC m=+256.025395963" Mar 19 09:22:40.730756 master-0 kubenswrapper[7385]: I0319 09:22:40.730720 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:40.730756 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:40.730756 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:40.730756 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:40.730984 master-0 kubenswrapper[7385]: I0319 09:22:40.730777 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:41.730687 master-0 kubenswrapper[7385]: I0319 09:22:41.730640 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:41.730687 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:41.730687 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:41.730687 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:41.744916 master-0 kubenswrapper[7385]: 
I0319 09:22:41.730708 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:42.340823 master-0 kubenswrapper[7385]: I0319 09:22:42.340754 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" event={"ID":"5ae3c935-4beb-4cc9-ba91-d82cac3148dd","Type":"ContainerStarted","Data":"b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4"} Mar 19 09:22:42.361587 master-0 kubenswrapper[7385]: I0319 09:22:42.361475 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" podStartSLOduration=1.831497203 podStartE2EDuration="3.361451162s" podCreationTimestamp="2026-03-19 09:22:39 +0000 UTC" firstStartedPulling="2026-03-19 09:22:40.118632596 +0000 UTC m=+255.793062297" lastFinishedPulling="2026-03-19 09:22:41.648586555 +0000 UTC m=+257.323016256" observedRunningTime="2026-03-19 09:22:42.359746367 +0000 UTC m=+258.034176148" watchObservedRunningTime="2026-03-19 09:22:42.361451162 +0000 UTC m=+258.035880893" Mar 19 09:22:42.730983 master-0 kubenswrapper[7385]: I0319 09:22:42.730892 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:42.730983 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:42.730983 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:42.730983 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:42.731849 master-0 kubenswrapper[7385]: I0319 09:22:42.731004 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:43.731956 master-0 kubenswrapper[7385]: I0319 09:22:43.731857 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:43.731956 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:43.731956 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:43.731956 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:43.732942 master-0 kubenswrapper[7385]: I0319 09:22:43.731967 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:44.732797 master-0 kubenswrapper[7385]: I0319 09:22:44.732729 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:44.732797 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:44.732797 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:44.732797 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:44.732797 master-0 kubenswrapper[7385]: I0319 09:22:44.732797 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:45.731207 
master-0 kubenswrapper[7385]: I0319 09:22:45.731104 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:45.731207 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:45.731207 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:45.731207 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:45.731682 master-0 kubenswrapper[7385]: I0319 09:22:45.731223 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:46.731104 master-0 kubenswrapper[7385]: I0319 09:22:46.731002 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:46.731104 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:46.731104 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:46.731104 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:46.731104 master-0 kubenswrapper[7385]: I0319 09:22:46.731066 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:47.529705 master-0 kubenswrapper[7385]: I0319 09:22:47.529642 7385 scope.go:117] "RemoveContainer" containerID="07f85a8394cfe2927824d6dd40beca1cf31136db472d1b09c7b6f5f1e6dae94f" Mar 19 09:22:47.731520 master-0 
kubenswrapper[7385]: I0319 09:22:47.731457 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:47.731520 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:47.731520 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:47.731520 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:47.731520 master-0 kubenswrapper[7385]: I0319 09:22:47.731515 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:48.381347 master-0 kubenswrapper[7385]: I0319 09:22:48.381262 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerStarted","Data":"3091cd39c91635e4ee1ea702b34d340a7966feb6a8a53ede843ba60081ff82bc"} Mar 19 09:22:48.732415 master-0 kubenswrapper[7385]: I0319 09:22:48.732266 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:48.732415 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:48.732415 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:48.732415 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:48.732415 master-0 kubenswrapper[7385]: I0319 09:22:48.732348 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:49.731181 master-0 kubenswrapper[7385]: I0319 09:22:49.731093 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:49.731181 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:49.731181 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:49.731181 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:49.731659 master-0 kubenswrapper[7385]: I0319 09:22:49.731198 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:50.730913 master-0 kubenswrapper[7385]: I0319 09:22:50.730837 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:50.730913 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:50.730913 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:50.730913 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:50.731558 master-0 kubenswrapper[7385]: I0319 09:22:50.730923 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:51.731291 master-0 kubenswrapper[7385]: I0319 09:22:51.731208 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:51.731291 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:51.731291 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:51.731291 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:51.731291 master-0 kubenswrapper[7385]: I0319 09:22:51.731286 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:52.731296 master-0 kubenswrapper[7385]: I0319 09:22:52.731233 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:52.731296 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:52.731296 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:52.731296 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:52.731296 master-0 kubenswrapper[7385]: I0319 09:22:52.731287 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:53.730161 master-0 kubenswrapper[7385]: I0319 09:22:53.730120 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 
09:22:53.730161 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:53.730161 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:53.730161 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:53.730478 master-0 kubenswrapper[7385]: I0319 09:22:53.730454 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:54.732557 master-0 kubenswrapper[7385]: I0319 09:22:54.732494 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:54.732557 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:54.732557 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:54.732557 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:54.733059 master-0 kubenswrapper[7385]: I0319 09:22:54.732582 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:55.731148 master-0 kubenswrapper[7385]: I0319 09:22:55.731028 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:55.731148 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:55.731148 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:55.731148 master-0 kubenswrapper[7385]: healthz 
check failed Mar 19 09:22:55.731148 master-0 kubenswrapper[7385]: I0319 09:22:55.731100 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:56.732369 master-0 kubenswrapper[7385]: I0319 09:22:56.732251 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:56.732369 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:56.732369 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:56.732369 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:56.733363 master-0 kubenswrapper[7385]: I0319 09:22:56.732398 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:57.730227 master-0 kubenswrapper[7385]: I0319 09:22:57.730166 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:57.730227 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:57.730227 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:57.730227 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:57.730602 master-0 kubenswrapper[7385]: I0319 09:22:57.730234 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" 
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:58.731277 master-0 kubenswrapper[7385]: I0319 09:22:58.731204 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:58.731277 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:58.731277 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:58.731277 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:22:58.732343 master-0 kubenswrapper[7385]: I0319 09:22:58.731283 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:22:59.560056 master-0 kubenswrapper[7385]: I0319 09:22:59.560002 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:59.560056 master-0 kubenswrapper[7385]: I0319 09:22:59.560058 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:22:59.731608 master-0 kubenswrapper[7385]: I0319 09:22:59.731531 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:22:59.731608 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:22:59.731608 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:22:59.731608 master-0 kubenswrapper[7385]: healthz check 
failed Mar 19 09:22:59.731608 master-0 kubenswrapper[7385]: I0319 09:22:59.731609 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:00.733606 master-0 kubenswrapper[7385]: I0319 09:23:00.732977 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:00.733606 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:00.733606 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:00.733606 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:00.733606 master-0 kubenswrapper[7385]: I0319 09:23:00.733091 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:01.731618 master-0 kubenswrapper[7385]: I0319 09:23:01.731480 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:01.731618 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:01.731618 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:01.731618 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:01.732076 master-0 kubenswrapper[7385]: I0319 09:23:01.731651 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" 
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:02.732071 master-0 kubenswrapper[7385]: I0319 09:23:02.731975 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:02.732071 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:02.732071 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:02.732071 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:02.733100 master-0 kubenswrapper[7385]: I0319 09:23:02.732101 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:03.733589 master-0 kubenswrapper[7385]: I0319 09:23:03.733477 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:03.733589 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:03.733589 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:03.733589 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:03.734613 master-0 kubenswrapper[7385]: I0319 09:23:03.733643 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:04.730105 master-0 kubenswrapper[7385]: I0319 09:23:04.730054 7385 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:04.730105 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:04.730105 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:04.730105 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:04.730394 master-0 kubenswrapper[7385]: I0319 09:23:04.730115 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:05.731012 master-0 kubenswrapper[7385]: I0319 09:23:05.730928 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:05.731012 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:05.731012 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:05.731012 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:05.732070 master-0 kubenswrapper[7385]: I0319 09:23:05.731040 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:06.731023 master-0 kubenswrapper[7385]: I0319 09:23:06.730953 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:06.731023 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:06.731023 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:06.731023 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:06.731774 master-0 kubenswrapper[7385]: I0319 09:23:06.731046 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:07.730487 master-0 kubenswrapper[7385]: I0319 09:23:07.730433 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:07.730487 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:07.730487 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:07.730487 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:07.730874 master-0 kubenswrapper[7385]: I0319 09:23:07.730519 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:08.731865 master-0 kubenswrapper[7385]: I0319 09:23:08.731799 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:08.731865 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:08.731865 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:08.731865 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:08.732914 master-0 kubenswrapper[7385]: I0319 09:23:08.731913 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:09.731643 master-0 kubenswrapper[7385]: I0319 09:23:09.731500 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:09.731643 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:09.731643 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:09.731643 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:09.732612 master-0 kubenswrapper[7385]: I0319 09:23:09.731652 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:10.730296 master-0 kubenswrapper[7385]: I0319 09:23:10.730235 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:10.730296 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:10.730296 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:10.730296 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:10.730508 master-0 kubenswrapper[7385]: I0319 09:23:10.730336 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:11.731204 master-0 kubenswrapper[7385]: I0319 09:23:11.731106 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:11.731204 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:11.731204 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:11.731204 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:11.731204 master-0 kubenswrapper[7385]: I0319 09:23:11.731192 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:12.730908 master-0 kubenswrapper[7385]: I0319 09:23:12.730842 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:12.730908 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:12.730908 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:12.730908 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:12.731255 master-0 kubenswrapper[7385]: I0319 09:23:12.730949 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:13.730805 master-0 kubenswrapper[7385]: I0319 09:23:13.730758 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:13.730805 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:13.730805 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:13.730805 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:13.731175 master-0 kubenswrapper[7385]: I0319 09:23:13.730816 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:14.731864 master-0 kubenswrapper[7385]: I0319 09:23:14.731798 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:14.731864 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:14.731864 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:14.731864 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:14.732423 master-0 kubenswrapper[7385]: I0319 09:23:14.731889 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:15.731886 master-0 kubenswrapper[7385]: I0319 09:23:15.731816 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:15.731886 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:15.731886 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:15.731886 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:15.732518 master-0 kubenswrapper[7385]: I0319 09:23:15.731907 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:16.732768 master-0 kubenswrapper[7385]: I0319 09:23:16.732682 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:16.732768 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:16.732768 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:16.732768 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:16.733737 master-0 kubenswrapper[7385]: I0319 09:23:16.732826 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:17.732081 master-0 kubenswrapper[7385]: I0319 09:23:17.732013 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:17.732081 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:17.732081 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:17.732081 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:17.732747 master-0 kubenswrapper[7385]: I0319 09:23:17.732097 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:18.731627 master-0 kubenswrapper[7385]: I0319 09:23:18.731523 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:18.731627 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:18.731627 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:18.731627 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:18.732754 master-0 kubenswrapper[7385]: I0319 09:23:18.732704 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:19.566856 master-0 kubenswrapper[7385]: I0319 09:23:19.566787 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:23:19.572630 master-0 kubenswrapper[7385]: I0319 09:23:19.571857 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:23:19.729956 master-0 kubenswrapper[7385]: I0319 09:23:19.729900 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:19.729956 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:19.729956 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:19.729956 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:19.730228 master-0 kubenswrapper[7385]: I0319 09:23:19.729958 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:20.731734 master-0 kubenswrapper[7385]: I0319 09:23:20.731627 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:20.731734 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:20.731734 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:20.731734 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:20.731734 master-0 kubenswrapper[7385]: I0319 09:23:20.731704 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:21.730832 master-0 kubenswrapper[7385]: I0319 09:23:21.730762 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:21.730832 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:21.730832 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:21.730832 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:21.731141 master-0 kubenswrapper[7385]: I0319 09:23:21.730852 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:22.730769 master-0 kubenswrapper[7385]: I0319 09:23:22.730690 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:22.730769 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:22.730769 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:22.730769 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:22.732087 master-0 kubenswrapper[7385]: I0319 09:23:22.730775 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:23.731579 master-0 kubenswrapper[7385]: I0319 09:23:23.731060 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:23.731579 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:23.731579 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:23.731579 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:23.731579 master-0 kubenswrapper[7385]: I0319 09:23:23.731222 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:24.731432 master-0 kubenswrapper[7385]: I0319 09:23:24.731042 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:24.731432 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:24.731432 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:24.731432 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:24.731432 master-0 kubenswrapper[7385]: I0319 09:23:24.731115 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:24.908968 master-0 kubenswrapper[7385]: I0319 09:23:24.908893 7385 scope.go:117] "RemoveContainer" containerID="e2912f5a07027e593c03c831722de1c74b974cbf7fe0986009830ada22289435"
Mar 19 09:23:25.731941 master-0 kubenswrapper[7385]: I0319 09:23:25.731844 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:25.731941 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:25.731941 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:25.731941 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:25.733474 master-0 kubenswrapper[7385]: I0319 09:23:25.731973 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:26.731333 master-0 kubenswrapper[7385]: I0319 09:23:26.731262 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:26.731333 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:26.731333 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:26.731333 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:26.731795 master-0 kubenswrapper[7385]: I0319 09:23:26.731339 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:27.731531 master-0 kubenswrapper[7385]: I0319 09:23:27.731466 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:27.731531 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:27.731531 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:27.731531 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:27.732640 master-0 kubenswrapper[7385]: I0319 09:23:27.732531 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:28.730598 master-0 kubenswrapper[7385]: I0319 09:23:28.730513 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:28.730598 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:28.730598 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:28.730598 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:28.730908 master-0 kubenswrapper[7385]: I0319 09:23:28.730607 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:29.731173 master-0 kubenswrapper[7385]: I0319 09:23:29.731072 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:29.731173 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:29.731173 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:29.731173 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:29.731792 master-0 kubenswrapper[7385]: I0319 09:23:29.731188 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:30.731259 master-0 kubenswrapper[7385]: I0319 09:23:30.731193 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:30.731259 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:30.731259 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:30.731259 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:30.732487 master-0 kubenswrapper[7385]: I0319 09:23:30.731286 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:31.947925 master-0 kubenswrapper[7385]: I0319 09:23:31.947796 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:31.947925 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:31.947925 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:31.947925 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:31.947925 master-0 kubenswrapper[7385]: I0319 09:23:31.947923 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:32.774750 master-0 kubenswrapper[7385]: I0319 09:23:32.774613 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:32.774750 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:32.774750 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:32.774750 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:32.774750 master-0 kubenswrapper[7385]: I0319 09:23:32.774705 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:33.730628 master-0 kubenswrapper[7385]: I0319 09:23:33.730567 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:33.730628 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:33.730628 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:33.730628 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:33.731302 master-0 kubenswrapper[7385]: I0319 09:23:33.730642 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:34.738281 master-0 kubenswrapper[7385]: I0319 09:23:34.738236 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-6r9c4"]
Mar 19 09:23:34.741365 master-0 kubenswrapper[7385]: I0319 09:23:34.739001 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:34.741365 master-0 kubenswrapper[7385]: I0319 09:23:34.739704 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:34.741365 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:34.741365 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:34.741365 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:34.741365 master-0 kubenswrapper[7385]: I0319 09:23:34.739733 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:34.741902 master-0 kubenswrapper[7385]: I0319 09:23:34.741864 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 19 09:23:34.742315 master-0 kubenswrapper[7385]: I0319 09:23:34.742284 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4d2wn"
Mar 19 09:23:34.742440 master-0 kubenswrapper[7385]: I0319 09:23:34.742423 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 19 09:23:34.742845 master-0 kubenswrapper[7385]: I0319 09:23:34.742801 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 19 09:23:34.760486 master-0 kubenswrapper[7385]: I0319 09:23:34.755909 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6r9c4"]
Mar 19 09:23:34.837807 master-0 kubenswrapper[7385]: I0319 09:23:34.837757 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:34.838002 master-0 kubenswrapper[7385]: I0319 09:23:34.837833 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqc86\" (UniqueName: \"kubernetes.io/projected/9d3fd276-2fe2-423a-b1ee-f27f1596d013-kube-api-access-cqc86\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:34.939870 master-0 kubenswrapper[7385]: I0319 09:23:34.939750 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:34.939870 master-0 kubenswrapper[7385]: I0319 09:23:34.939812 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqc86\" (UniqueName: \"kubernetes.io/projected/9d3fd276-2fe2-423a-b1ee-f27f1596d013-kube-api-access-cqc86\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:34.945760 master-0 kubenswrapper[7385]: I0319 09:23:34.945712 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:34.956751 master-0 kubenswrapper[7385]: I0319 09:23:34.956726 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqc86\" (UniqueName: \"kubernetes.io/projected/9d3fd276-2fe2-423a-b1ee-f27f1596d013-kube-api-access-cqc86\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:35.039266 master-0 kubenswrapper[7385]: I0319 09:23:35.039230 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/0.log"
Mar 19 09:23:35.039266 master-0 kubenswrapper[7385]: I0319 09:23:35.039269 7385 generic.go:334] "Generic (PLEG): container finished" podID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" containerID="c8c2b59685bc30549de9bfd2d5a139e18ceba9f4c5c0b572b2cf26e45dd85e1b" exitCode=1
Mar 19 09:23:35.039506 master-0 kubenswrapper[7385]: I0319 09:23:35.039294 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerDied","Data":"c8c2b59685bc30549de9bfd2d5a139e18ceba9f4c5c0b572b2cf26e45dd85e1b"}
Mar 19 09:23:35.039725 master-0 kubenswrapper[7385]: I0319 09:23:35.039707 7385 scope.go:117] "RemoveContainer" containerID="c8c2b59685bc30549de9bfd2d5a139e18ceba9f4c5c0b572b2cf26e45dd85e1b"
Mar 19 09:23:35.096175 master-0 kubenswrapper[7385]: I0319 09:23:35.096122 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:23:35.518626 master-0 kubenswrapper[7385]: I0319 09:23:35.518480 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-6r9c4"]
Mar 19 09:23:35.524289 master-0 kubenswrapper[7385]: W0319 09:23:35.524239 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d3fd276_2fe2_423a_b1ee_f27f1596d013.slice/crio-ba384c9cdc57f87a975d87b2de9f0cfa5598c8a35123c7bc925dcebbf60a5093 WatchSource:0}: Error finding container ba384c9cdc57f87a975d87b2de9f0cfa5598c8a35123c7bc925dcebbf60a5093: Status 404 returned error can't find the container with id ba384c9cdc57f87a975d87b2de9f0cfa5598c8a35123c7bc925dcebbf60a5093
Mar 19 09:23:35.730713 master-0 kubenswrapper[7385]: I0319 09:23:35.730662 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:35.730713 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:35.730713 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:35.730713 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:35.731003 master-0 kubenswrapper[7385]: I0319 09:23:35.730719 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:36.050154 master-0 kubenswrapper[7385]: I0319 09:23:36.050124 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/0.log"
Mar 19 09:23:36.050713 master-0 kubenswrapper[7385]: I0319 09:23:36.050690 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"88c0357aa022be581857f74b9852a9a65a3b84fe610ed6b9bc79f94f9ef05744"}
Mar 19 09:23:36.053235 master-0 kubenswrapper[7385]: I0319 09:23:36.053185 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6r9c4" event={"ID":"9d3fd276-2fe2-423a-b1ee-f27f1596d013","Type":"ContainerStarted","Data":"3e8763f2a8d43a6b7db72179814a076f8205ed3fb226991fe61df7da1aae6c72"}
Mar 19 09:23:36.053315 master-0 kubenswrapper[7385]: I0319 09:23:36.053246 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-6r9c4" event={"ID":"9d3fd276-2fe2-423a-b1ee-f27f1596d013","Type":"ContainerStarted","Data":"ba384c9cdc57f87a975d87b2de9f0cfa5598c8a35123c7bc925dcebbf60a5093"}
Mar 19 09:23:36.106055 master-0 kubenswrapper[7385]: I0319 09:23:36.105933 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-6r9c4" podStartSLOduration=2.105784986 podStartE2EDuration="2.105784986s" podCreationTimestamp="2026-03-19 09:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:23:36.099866985 +0000 UTC m=+311.774296686" watchObservedRunningTime="2026-03-19 09:23:36.105784986 +0000 UTC m=+311.780214737"
Mar 19 09:23:36.732083 master-0 kubenswrapper[7385]: I0319 09:23:36.731995 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:36.732083 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:36.732083 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:36.732083 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:36.732083 master-0 kubenswrapper[7385]: I0319 09:23:36.732063 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:37.732013 master-0 kubenswrapper[7385]: I0319 09:23:37.731942 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:37.732013 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:37.732013 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:37.732013 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:37.733147 master-0 kubenswrapper[7385]: I0319 09:23:37.732854 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:38.731487 master-0 kubenswrapper[7385]: I0319 09:23:38.731415 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:38.731487 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:38.731487 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:38.731487 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:38.731967 master-0 kubenswrapper[7385]: I0319 09:23:38.731528 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:39.731128 master-0 kubenswrapper[7385]: I0319 09:23:39.731047 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:39.731128 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:39.731128 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:39.731128 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:39.731942 master-0 kubenswrapper[7385]: I0319 09:23:39.731150 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:23:40.731715 master-0 kubenswrapper[7385]: I0319 09:23:40.731651 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:23:40.731715 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:23:40.731715 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:23:40.731715 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:23:40.731715 master-0 kubenswrapper[7385]: I0319 09:23:40.731717 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 19 09:23:41.732455 master-0 kubenswrapper[7385]: I0319 09:23:41.732328 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:41.732455 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:41.732455 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:41.732455 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:41.733595 master-0 kubenswrapper[7385]: I0319 09:23:41.732654 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:42.731245 master-0 kubenswrapper[7385]: I0319 09:23:42.731105 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:42.731245 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:42.731245 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:42.731245 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:42.731925 master-0 kubenswrapper[7385]: I0319 09:23:42.731258 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:43.731266 master-0 kubenswrapper[7385]: I0319 09:23:43.731188 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:43.731266 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:43.731266 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:43.731266 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:43.731934 master-0 kubenswrapper[7385]: I0319 09:23:43.731302 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:44.731076 master-0 kubenswrapper[7385]: I0319 09:23:44.730985 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:44.731076 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:44.731076 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:44.731076 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:44.731076 master-0 kubenswrapper[7385]: I0319 09:23:44.731038 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:45.731534 master-0 kubenswrapper[7385]: I0319 09:23:45.731472 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:45.731534 master-0 kubenswrapper[7385]: 
[-]has-synced failed: reason withheld Mar 19 09:23:45.731534 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:45.731534 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:45.732822 master-0 kubenswrapper[7385]: I0319 09:23:45.731587 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:46.731138 master-0 kubenswrapper[7385]: I0319 09:23:46.731084 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:46.731138 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:46.731138 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:46.731138 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:46.731430 master-0 kubenswrapper[7385]: I0319 09:23:46.731162 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:47.731813 master-0 kubenswrapper[7385]: I0319 09:23:47.731741 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:47.731813 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:47.731813 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:47.731813 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:47.732362 master-0 
kubenswrapper[7385]: I0319 09:23:47.731847 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:48.731284 master-0 kubenswrapper[7385]: I0319 09:23:48.731122 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:48.731284 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:48.731284 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:48.731284 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:48.732706 master-0 kubenswrapper[7385]: I0319 09:23:48.731308 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:49.730694 master-0 kubenswrapper[7385]: I0319 09:23:49.730610 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:49.730694 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:49.730694 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:49.730694 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:49.730694 master-0 kubenswrapper[7385]: I0319 09:23:49.730681 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:50.732450 master-0 kubenswrapper[7385]: I0319 09:23:50.731204 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:50.732450 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:50.732450 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:50.732450 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:50.732450 master-0 kubenswrapper[7385]: I0319 09:23:50.731264 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:51.730931 master-0 kubenswrapper[7385]: I0319 09:23:51.730840 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:51.730931 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:51.730931 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:51.730931 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:51.730931 master-0 kubenswrapper[7385]: I0319 09:23:51.730902 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:52.732081 master-0 kubenswrapper[7385]: I0319 09:23:52.731983 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:52.732081 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:52.732081 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:52.732081 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:52.733303 master-0 kubenswrapper[7385]: I0319 09:23:52.732094 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:53.731251 master-0 kubenswrapper[7385]: I0319 09:23:53.731177 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:53.731251 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:53.731251 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:53.731251 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:53.734699 master-0 kubenswrapper[7385]: I0319 09:23:53.731265 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:54.732181 master-0 kubenswrapper[7385]: I0319 09:23:54.732103 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 
09:23:54.732181 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:54.732181 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:54.732181 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:54.732637 master-0 kubenswrapper[7385]: I0319 09:23:54.732208 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:55.731433 master-0 kubenswrapper[7385]: I0319 09:23:55.731325 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:55.731433 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:55.731433 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:55.731433 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:55.732652 master-0 kubenswrapper[7385]: I0319 09:23:55.731434 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:56.731879 master-0 kubenswrapper[7385]: I0319 09:23:56.731805 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:56.731879 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:56.731879 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:56.731879 master-0 kubenswrapper[7385]: healthz 
check failed Mar 19 09:23:56.732913 master-0 kubenswrapper[7385]: I0319 09:23:56.731892 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:57.731822 master-0 kubenswrapper[7385]: I0319 09:23:57.731747 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:57.731822 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:57.731822 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:57.731822 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:57.731822 master-0 kubenswrapper[7385]: I0319 09:23:57.731824 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:58.731791 master-0 kubenswrapper[7385]: I0319 09:23:58.731711 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:58.731791 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:58.731791 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:58.731791 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:58.732060 master-0 kubenswrapper[7385]: I0319 09:23:58.731819 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" 
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:23:59.731712 master-0 kubenswrapper[7385]: I0319 09:23:59.731653 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:23:59.731712 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:23:59.731712 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:23:59.731712 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:23:59.732629 master-0 kubenswrapper[7385]: I0319 09:23:59.731747 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:00.731760 master-0 kubenswrapper[7385]: I0319 09:24:00.731663 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:00.731760 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:00.731760 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:00.731760 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:00.732798 master-0 kubenswrapper[7385]: I0319 09:24:00.731801 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:01.732076 master-0 kubenswrapper[7385]: I0319 09:24:01.731997 7385 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:01.732076 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:01.732076 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:01.732076 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:01.733142 master-0 kubenswrapper[7385]: I0319 09:24:01.732087 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:02.731319 master-0 kubenswrapper[7385]: I0319 09:24:02.731262 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:02.731319 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:02.731319 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:02.731319 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:02.731687 master-0 kubenswrapper[7385]: I0319 09:24:02.731365 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:03.730601 master-0 kubenswrapper[7385]: I0319 09:24:03.730514 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:03.730601 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:03.730601 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:03.730601 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:03.731710 master-0 kubenswrapper[7385]: I0319 09:24:03.730617 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:04.731113 master-0 kubenswrapper[7385]: I0319 09:24:04.731036 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:04.731113 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:04.731113 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:04.731113 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:04.731769 master-0 kubenswrapper[7385]: I0319 09:24:04.731134 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:05.732151 master-0 kubenswrapper[7385]: I0319 09:24:05.731592 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:05.732151 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:05.732151 master-0 kubenswrapper[7385]: [+]process-running ok 
Mar 19 09:24:05.732151 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:05.732151 master-0 kubenswrapper[7385]: I0319 09:24:05.731700 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:06.732792 master-0 kubenswrapper[7385]: I0319 09:24:06.732732 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:06.732792 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:06.732792 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:06.732792 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:06.734004 master-0 kubenswrapper[7385]: I0319 09:24:06.733957 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:07.731410 master-0 kubenswrapper[7385]: I0319 09:24:07.731347 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:07.731410 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:07.731410 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:07.731410 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:07.731717 master-0 kubenswrapper[7385]: I0319 09:24:07.731434 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:08.732184 master-0 kubenswrapper[7385]: I0319 09:24:08.732098 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:08.732184 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:08.732184 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:08.732184 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:08.732184 master-0 kubenswrapper[7385]: I0319 09:24:08.732187 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:09.730914 master-0 kubenswrapper[7385]: I0319 09:24:09.730854 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:09.730914 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:09.730914 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:09.730914 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:09.730914 master-0 kubenswrapper[7385]: I0319 09:24:09.730918 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:10.730936 master-0 kubenswrapper[7385]: I0319 09:24:10.730855 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:10.730936 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:10.730936 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:10.730936 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:10.731995 master-0 kubenswrapper[7385]: I0319 09:24:10.730988 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:11.730934 master-0 kubenswrapper[7385]: I0319 09:24:11.730860 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:11.730934 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:11.730934 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:11.730934 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:11.730934 master-0 kubenswrapper[7385]: I0319 09:24:11.730948 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:12.731960 master-0 kubenswrapper[7385]: I0319 09:24:12.731877 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:12.731960 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:12.731960 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:12.731960 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:12.732745 master-0 kubenswrapper[7385]: I0319 09:24:12.731978 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:13.732165 master-0 kubenswrapper[7385]: I0319 09:24:13.732068 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:13.732165 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:13.732165 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:13.732165 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:13.732165 master-0 kubenswrapper[7385]: I0319 09:24:13.732160 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:14.732616 master-0 kubenswrapper[7385]: I0319 09:24:14.732452 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:14.732616 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:14.732616 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:14.732616 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:14.733792 master-0 kubenswrapper[7385]: I0319 09:24:14.732864 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:15.730948 master-0 kubenswrapper[7385]: I0319 09:24:15.730887 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:15.730948 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:15.730948 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:15.730948 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:15.731240 master-0 kubenswrapper[7385]: I0319 09:24:15.730980 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:24:16.730936 master-0 kubenswrapper[7385]: I0319 09:24:16.730890 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:24:16.730936 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:24:16.730936 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:24:16.730936 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:24:16.731591 master-0 kubenswrapper[7385]: I0319 09:24:16.731559 7385 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:17.731015 master-0 kubenswrapper[7385]: I0319 09:24:17.730923 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:17.731015 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:17.731015 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:17.731015 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:17.732201 master-0 kubenswrapper[7385]: I0319 09:24:17.731053 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:18.731510 master-0 kubenswrapper[7385]: I0319 09:24:18.731407 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:18.731510 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:18.731510 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:18.731510 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:18.731510 master-0 kubenswrapper[7385]: I0319 09:24:18.731489 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 19 09:24:19.730725 master-0 kubenswrapper[7385]: I0319 09:24:19.730667 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:24:19.730725 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:24:19.730725 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:24:19.730725 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:24:19.731261 master-0 kubenswrapper[7385]: I0319 09:24:19.730736 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:24:19.731261 master-0 kubenswrapper[7385]: I0319 09:24:19.730779 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:24:19.731399 master-0 kubenswrapper[7385]: I0319 09:24:19.731261 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"5cb6c10ede1632045f4c6b7b809db52b73fe2590e0eca9bb5097244794291556"} pod="openshift-ingress/router-default-7dcf5569b5-k99cg" containerMessage="Container router failed startup probe, will be restarted" Mar 19 09:24:19.731399 master-0 kubenswrapper[7385]: I0319 09:24:19.731300 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" containerID="cri-o://5cb6c10ede1632045f4c6b7b809db52b73fe2590e0eca9bb5097244794291556" gracePeriod=3600 Mar 19 09:25:06.677867 master-0 kubenswrapper[7385]: I0319 09:25:06.677762 7385 generic.go:334] "Generic (PLEG): 
container finished" podID="57227a66-c758-4a46-a5e1-f603baa3f570" containerID="5cb6c10ede1632045f4c6b7b809db52b73fe2590e0eca9bb5097244794291556" exitCode=0 Mar 19 09:25:06.677867 master-0 kubenswrapper[7385]: I0319 09:25:06.677842 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerDied","Data":"5cb6c10ede1632045f4c6b7b809db52b73fe2590e0eca9bb5097244794291556"} Mar 19 09:25:06.679160 master-0 kubenswrapper[7385]: I0319 09:25:06.677898 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerStarted","Data":"3a5dd314e61c7e5e336d52053d0330f63d21f00e76686c7b0a177fb71dc220dc"} Mar 19 09:25:06.729109 master-0 kubenswrapper[7385]: I0319 09:25:06.728997 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:25:06.734467 master-0 kubenswrapper[7385]: I0319 09:25:06.733707 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:06.734467 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:06.734467 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:06.734467 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:06.734467 master-0 kubenswrapper[7385]: I0319 09:25:06.733784 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:07.732076 master-0 kubenswrapper[7385]: I0319 09:25:07.731987 
7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:07.732076 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:07.732076 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:07.732076 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:07.733070 master-0 kubenswrapper[7385]: I0319 09:25:07.732080 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:08.732782 master-0 kubenswrapper[7385]: I0319 09:25:08.732688 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:08.732782 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:08.732782 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:08.732782 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:08.734302 master-0 kubenswrapper[7385]: I0319 09:25:08.732802 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:09.729008 master-0 kubenswrapper[7385]: I0319 09:25:09.728862 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:25:09.732256 master-0 kubenswrapper[7385]: I0319 09:25:09.732189 7385 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:09.732256 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:09.732256 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:09.732256 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:09.732701 master-0 kubenswrapper[7385]: I0319 09:25:09.732280 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:10.731887 master-0 kubenswrapper[7385]: I0319 09:25:10.731743 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:10.731887 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:10.731887 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:10.731887 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:10.731887 master-0 kubenswrapper[7385]: I0319 09:25:10.731884 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:11.731618 master-0 kubenswrapper[7385]: I0319 09:25:11.731510 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:11.731618 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:11.731618 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:11.731618 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:11.731618 master-0 kubenswrapper[7385]: I0319 09:25:11.731605 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:12.731346 master-0 kubenswrapper[7385]: I0319 09:25:12.731269 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:12.731346 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:12.731346 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:12.731346 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:12.732701 master-0 kubenswrapper[7385]: I0319 09:25:12.731375 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:13.732594 master-0 kubenswrapper[7385]: I0319 09:25:13.731393 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:13.732594 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:13.732594 master-0 kubenswrapper[7385]: [+]process-running ok 
Mar 19 09:25:13.732594 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:13.732594 master-0 kubenswrapper[7385]: I0319 09:25:13.731503 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:14.732002 master-0 kubenswrapper[7385]: I0319 09:25:14.731859 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:14.732002 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:14.732002 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:14.732002 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:14.732002 master-0 kubenswrapper[7385]: I0319 09:25:14.731955 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:15.730825 master-0 kubenswrapper[7385]: I0319 09:25:15.730728 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:15.730825 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:15.730825 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:15.730825 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:15.731491 master-0 kubenswrapper[7385]: I0319 09:25:15.730882 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:16.731725 master-0 kubenswrapper[7385]: I0319 09:25:16.731627 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:16.731725 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:16.731725 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:16.731725 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:16.732952 master-0 kubenswrapper[7385]: I0319 09:25:16.731751 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:17.730939 master-0 kubenswrapper[7385]: I0319 09:25:17.730877 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:17.730939 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:17.730939 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:17.730939 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:17.731236 master-0 kubenswrapper[7385]: I0319 09:25:17.730994 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:18.731554 
master-0 kubenswrapper[7385]: I0319 09:25:18.731476 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:18.731554 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:18.731554 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:18.731554 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:18.732307 master-0 kubenswrapper[7385]: I0319 09:25:18.731573 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:19.730516 master-0 kubenswrapper[7385]: I0319 09:25:19.730441 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:19.730516 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:19.730516 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:19.730516 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:19.730516 master-0 kubenswrapper[7385]: I0319 09:25:19.730493 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:20.730867 master-0 kubenswrapper[7385]: I0319 09:25:20.730779 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:20.730867 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:20.730867 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:20.730867 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:20.730867 master-0 kubenswrapper[7385]: I0319 09:25:20.730860 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:21.731629 master-0 kubenswrapper[7385]: I0319 09:25:21.731522 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:21.731629 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:21.731629 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:21.731629 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:21.732308 master-0 kubenswrapper[7385]: I0319 09:25:21.731651 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:22.731310 master-0 kubenswrapper[7385]: I0319 09:25:22.731193 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:22.731310 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:22.731310 master-0 
kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:22.731310 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:22.732422 master-0 kubenswrapper[7385]: I0319 09:25:22.731300 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:23.731115 master-0 kubenswrapper[7385]: I0319 09:25:23.731042 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:23.731115 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:23.731115 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:23.731115 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:23.731115 master-0 kubenswrapper[7385]: I0319 09:25:23.731121 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:24.734427 master-0 kubenswrapper[7385]: I0319 09:25:24.734294 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:24.734427 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:24.734427 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:24.734427 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:24.734427 master-0 kubenswrapper[7385]: I0319 09:25:24.734375 7385 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:25.732068 master-0 kubenswrapper[7385]: I0319 09:25:25.731989 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:25.732068 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:25.732068 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:25.732068 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:25.732478 master-0 kubenswrapper[7385]: I0319 09:25:25.732063 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:26.731943 master-0 kubenswrapper[7385]: I0319 09:25:26.731857 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:26.731943 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:26.731943 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:26.731943 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:26.734151 master-0 kubenswrapper[7385]: I0319 09:25:26.731972 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 19 09:25:27.732230 master-0 kubenswrapper[7385]: I0319 09:25:27.732139 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:27.732230 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:27.732230 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:27.732230 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:27.733297 master-0 kubenswrapper[7385]: I0319 09:25:27.732279 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:28.730645 master-0 kubenswrapper[7385]: I0319 09:25:28.730576 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:28.730645 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:28.730645 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:28.730645 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:28.731183 master-0 kubenswrapper[7385]: I0319 09:25:28.730665 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:29.732130 master-0 kubenswrapper[7385]: I0319 09:25:29.731986 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:29.732130 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:29.732130 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:29.732130 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:29.733100 master-0 kubenswrapper[7385]: I0319 09:25:29.732158 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:30.730653 master-0 kubenswrapper[7385]: I0319 09:25:30.730583 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:30.730653 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:30.730653 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:30.730653 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:30.730653 master-0 kubenswrapper[7385]: I0319 09:25:30.730653 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:31.731061 master-0 kubenswrapper[7385]: I0319 09:25:31.730939 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:31.731061 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 
09:25:31.731061 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:31.731061 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:31.732078 master-0 kubenswrapper[7385]: I0319 09:25:31.731062 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:32.731174 master-0 kubenswrapper[7385]: I0319 09:25:32.731096 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:32.731174 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:32.731174 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:32.731174 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:32.731174 master-0 kubenswrapper[7385]: I0319 09:25:32.731165 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:33.731766 master-0 kubenswrapper[7385]: I0319 09:25:33.731689 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:33.731766 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:33.731766 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:33.731766 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:33.732467 master-0 kubenswrapper[7385]: I0319 09:25:33.731798 
7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:34.731244 master-0 kubenswrapper[7385]: I0319 09:25:34.731127 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:34.731244 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:34.731244 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:34.731244 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:34.732746 master-0 kubenswrapper[7385]: I0319 09:25:34.731242 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:35.732488 master-0 kubenswrapper[7385]: I0319 09:25:35.732409 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:35.732488 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:35.732488 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:35.732488 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:35.736051 master-0 kubenswrapper[7385]: I0319 09:25:35.732506 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 19 09:25:35.885117 master-0 kubenswrapper[7385]: I0319 09:25:35.885061 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/1.log" Mar 19 09:25:35.886339 master-0 kubenswrapper[7385]: I0319 09:25:35.886298 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/0.log" Mar 19 09:25:35.886405 master-0 kubenswrapper[7385]: I0319 09:25:35.886363 7385 generic.go:334] "Generic (PLEG): container finished" podID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" containerID="88c0357aa022be581857f74b9852a9a65a3b84fe610ed6b9bc79f94f9ef05744" exitCode=1 Mar 19 09:25:35.886439 master-0 kubenswrapper[7385]: I0319 09:25:35.886403 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerDied","Data":"88c0357aa022be581857f74b9852a9a65a3b84fe610ed6b9bc79f94f9ef05744"} Mar 19 09:25:35.886469 master-0 kubenswrapper[7385]: I0319 09:25:35.886447 7385 scope.go:117] "RemoveContainer" containerID="c8c2b59685bc30549de9bfd2d5a139e18ceba9f4c5c0b572b2cf26e45dd85e1b" Mar 19 09:25:35.887343 master-0 kubenswrapper[7385]: I0319 09:25:35.887305 7385 scope.go:117] "RemoveContainer" containerID="88c0357aa022be581857f74b9852a9a65a3b84fe610ed6b9bc79f94f9ef05744" Mar 19 09:25:35.888371 master-0 kubenswrapper[7385]: E0319 09:25:35.887851 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" 
podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:25:36.730831 master-0 kubenswrapper[7385]: I0319 09:25:36.730684 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:36.730831 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:36.730831 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:36.730831 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:36.730831 master-0 kubenswrapper[7385]: I0319 09:25:36.730753 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:36.907793 master-0 kubenswrapper[7385]: I0319 09:25:36.907720 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/1.log" Mar 19 09:25:37.731412 master-0 kubenswrapper[7385]: I0319 09:25:37.731329 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:37.731412 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:37.731412 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:37.731412 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:37.731756 master-0 kubenswrapper[7385]: I0319 09:25:37.731432 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:38.731504 master-0 kubenswrapper[7385]: I0319 09:25:38.731435 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:38.731504 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:38.731504 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:38.731504 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:38.732260 master-0 kubenswrapper[7385]: I0319 09:25:38.731531 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:39.731155 master-0 kubenswrapper[7385]: I0319 09:25:39.731078 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:39.731155 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:39.731155 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:39.731155 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:39.731155 master-0 kubenswrapper[7385]: I0319 09:25:39.731143 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:40.731070 master-0 kubenswrapper[7385]: I0319 09:25:40.731017 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:40.731070 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:40.731070 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:40.731070 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:40.731458 master-0 kubenswrapper[7385]: I0319 09:25:40.731096 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:41.729820 master-0 kubenswrapper[7385]: I0319 09:25:41.729755 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:41.729820 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:41.729820 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:41.729820 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:41.730368 master-0 kubenswrapper[7385]: I0319 09:25:41.729830 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:42.730057 master-0 kubenswrapper[7385]: I0319 09:25:42.729993 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 
09:25:42.730057 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:42.730057 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:42.730057 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:42.730057 master-0 kubenswrapper[7385]: I0319 09:25:42.730050 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:43.731467 master-0 kubenswrapper[7385]: I0319 09:25:43.731378 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:43.731467 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:43.731467 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:43.731467 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:43.732094 master-0 kubenswrapper[7385]: I0319 09:25:43.731488 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:44.730328 master-0 kubenswrapper[7385]: I0319 09:25:44.730212 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:44.730328 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:44.730328 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:44.730328 master-0 kubenswrapper[7385]: healthz 
check failed Mar 19 09:25:44.730328 master-0 kubenswrapper[7385]: I0319 09:25:44.730320 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:45.730889 master-0 kubenswrapper[7385]: I0319 09:25:45.730798 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:45.730889 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:45.730889 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:45.730889 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:45.730889 master-0 kubenswrapper[7385]: I0319 09:25:45.730883 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:46.730636 master-0 kubenswrapper[7385]: I0319 09:25:46.730582 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:46.730636 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:46.730636 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:46.730636 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:46.731527 master-0 kubenswrapper[7385]: I0319 09:25:46.730642 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" 
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:47.530383 master-0 kubenswrapper[7385]: I0319 09:25:47.530287 7385 scope.go:117] "RemoveContainer" containerID="88c0357aa022be581857f74b9852a9a65a3b84fe610ed6b9bc79f94f9ef05744" Mar 19 09:25:47.730875 master-0 kubenswrapper[7385]: I0319 09:25:47.730815 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:47.730875 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:47.730875 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:47.730875 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:47.732650 master-0 kubenswrapper[7385]: I0319 09:25:47.730901 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:47.999972 master-0 kubenswrapper[7385]: I0319 09:25:47.999942 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/1.log" Mar 19 09:25:48.000634 master-0 kubenswrapper[7385]: I0319 09:25:48.000586 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"0a826efef4d4285208df9ac62804747687dd3c66bd7c0716a36851e3ff4bbfd4"} Mar 19 09:25:48.730142 master-0 kubenswrapper[7385]: I0319 09:25:48.730074 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:48.730142 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:48.730142 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:48.730142 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:48.730142 master-0 kubenswrapper[7385]: I0319 09:25:48.730136 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:49.730923 master-0 kubenswrapper[7385]: I0319 09:25:49.730828 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:49.730923 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:49.730923 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:49.730923 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:49.731478 master-0 kubenswrapper[7385]: I0319 09:25:49.730933 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:50.732400 master-0 kubenswrapper[7385]: I0319 09:25:50.732287 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:50.732400 master-0 kubenswrapper[7385]: 
[-]has-synced failed: reason withheld Mar 19 09:25:50.732400 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:50.732400 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:50.732400 master-0 kubenswrapper[7385]: I0319 09:25:50.732387 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:51.732867 master-0 kubenswrapper[7385]: I0319 09:25:51.732478 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:51.732867 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:51.732867 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:51.732867 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:51.732867 master-0 kubenswrapper[7385]: I0319 09:25:51.732610 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:52.731518 master-0 kubenswrapper[7385]: I0319 09:25:52.731410 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:52.731518 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:52.731518 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:52.731518 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:52.731518 master-0 
kubenswrapper[7385]: I0319 09:25:52.731475 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:53.732312 master-0 kubenswrapper[7385]: I0319 09:25:53.731956 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:53.732312 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:53.732312 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:53.732312 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:53.732312 master-0 kubenswrapper[7385]: I0319 09:25:53.732064 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:54.731158 master-0 kubenswrapper[7385]: I0319 09:25:54.731099 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:54.731158 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:54.731158 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:54.731158 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:54.731527 master-0 kubenswrapper[7385]: I0319 09:25:54.731185 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:55.731165 master-0 kubenswrapper[7385]: I0319 09:25:55.731105 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:55.731165 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:55.731165 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:55.731165 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:55.731971 master-0 kubenswrapper[7385]: I0319 09:25:55.731172 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:56.731703 master-0 kubenswrapper[7385]: I0319 09:25:56.731618 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:56.731703 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:56.731703 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:56.731703 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:56.732343 master-0 kubenswrapper[7385]: I0319 09:25:56.731719 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:57.731064 master-0 kubenswrapper[7385]: I0319 09:25:57.730991 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:57.731064 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:57.731064 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:57.731064 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:57.731064 master-0 kubenswrapper[7385]: I0319 09:25:57.731061 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:58.731779 master-0 kubenswrapper[7385]: I0319 09:25:58.731672 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:25:58.731779 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:58.731779 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:58.731779 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:58.732648 master-0 kubenswrapper[7385]: I0319 09:25:58.731773 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:25:59.747008 master-0 kubenswrapper[7385]: I0319 09:25:59.746921 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 
09:25:59.747008 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:25:59.747008 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:25:59.747008 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:25:59.747008 master-0 kubenswrapper[7385]: I0319 09:25:59.747002 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:00.730607 master-0 kubenswrapper[7385]: I0319 09:26:00.730511 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:00.730607 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:00.730607 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:00.730607 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:00.730607 master-0 kubenswrapper[7385]: I0319 09:26:00.730603 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:01.730834 master-0 kubenswrapper[7385]: I0319 09:26:01.730785 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:01.730834 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:01.730834 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:01.730834 master-0 kubenswrapper[7385]: healthz 
check failed Mar 19 09:26:01.731485 master-0 kubenswrapper[7385]: I0319 09:26:01.731451 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:02.731030 master-0 kubenswrapper[7385]: I0319 09:26:02.730964 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:02.731030 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:02.731030 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:02.731030 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:02.732137 master-0 kubenswrapper[7385]: I0319 09:26:02.731713 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:03.731116 master-0 kubenswrapper[7385]: I0319 09:26:03.731035 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:03.731116 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:03.731116 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:03.731116 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:03.731884 master-0 kubenswrapper[7385]: I0319 09:26:03.731149 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" 
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:04.731359 master-0 kubenswrapper[7385]: I0319 09:26:04.731299 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:04.731359 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:04.731359 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:04.731359 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:04.732299 master-0 kubenswrapper[7385]: I0319 09:26:04.731375 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:05.730914 master-0 kubenswrapper[7385]: I0319 09:26:05.730827 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:05.730914 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:05.730914 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:05.730914 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:05.731227 master-0 kubenswrapper[7385]: I0319 09:26:05.730927 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:06.731396 master-0 kubenswrapper[7385]: I0319 09:26:06.731304 7385 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:06.731396 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:06.731396 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:06.731396 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:06.731396 master-0 kubenswrapper[7385]: I0319 09:26:06.731396 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:07.731626 master-0 kubenswrapper[7385]: I0319 09:26:07.731509 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:07.731626 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:07.731626 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:07.731626 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:07.733185 master-0 kubenswrapper[7385]: I0319 09:26:07.731657 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:08.731230 master-0 kubenswrapper[7385]: I0319 09:26:08.731082 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:08.731230 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:08.731230 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:08.731230 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:08.732444 master-0 kubenswrapper[7385]: I0319 09:26:08.731701 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:09.731603 master-0 kubenswrapper[7385]: I0319 09:26:09.731521 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:09.731603 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:09.731603 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:09.731603 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:09.732686 master-0 kubenswrapper[7385]: I0319 09:26:09.731623 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:10.730427 master-0 kubenswrapper[7385]: I0319 09:26:10.730367 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:10.730427 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:10.730427 master-0 kubenswrapper[7385]: [+]process-running ok 
Mar 19 09:26:10.730427 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:10.730870 master-0 kubenswrapper[7385]: I0319 09:26:10.730436 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:11.730941 master-0 kubenswrapper[7385]: I0319 09:26:11.730874 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:11.730941 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:11.730941 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:11.730941 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:11.730941 master-0 kubenswrapper[7385]: I0319 09:26:11.730936 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:12.731465 master-0 kubenswrapper[7385]: I0319 09:26:12.731395 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:12.731465 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:12.731465 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:12.731465 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:12.732048 master-0 kubenswrapper[7385]: I0319 09:26:12.731491 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:13.731381 master-0 kubenswrapper[7385]: I0319 09:26:13.731323 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:13.731381 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:13.731381 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:13.731381 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:13.731981 master-0 kubenswrapper[7385]: I0319 09:26:13.731385 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:14.730625 master-0 kubenswrapper[7385]: I0319 09:26:14.730524 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:14.730625 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:14.730625 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:14.730625 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:14.730971 master-0 kubenswrapper[7385]: I0319 09:26:14.730671 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:15.731354 
master-0 kubenswrapper[7385]: I0319 09:26:15.731291 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:15.731354 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:15.731354 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:15.731354 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:15.732353 master-0 kubenswrapper[7385]: I0319 09:26:15.731357 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:16.731927 master-0 kubenswrapper[7385]: I0319 09:26:16.731833 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:16.731927 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:16.731927 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:16.731927 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:16.733061 master-0 kubenswrapper[7385]: I0319 09:26:16.731954 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:17.732016 master-0 kubenswrapper[7385]: I0319 09:26:17.731944 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:17.732016 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:17.732016 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:17.732016 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:17.732716 master-0 kubenswrapper[7385]: I0319 09:26:17.732021 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:18.682885 master-0 kubenswrapper[7385]: I0319 09:26:18.682777 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 09:26:18.683940 master-0 kubenswrapper[7385]: I0319 09:26:18.683899 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.688824 master-0 kubenswrapper[7385]: I0319 09:26:18.688070 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 19 09:26:18.688824 master-0 kubenswrapper[7385]: I0319 09:26:18.688114 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cnb44" Mar 19 09:26:18.694535 master-0 kubenswrapper[7385]: I0319 09:26:18.694492 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 09:26:18.730823 master-0 kubenswrapper[7385]: I0319 09:26:18.730775 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:18.730823 
master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:18.730823 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:18.730823 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:18.730823 master-0 kubenswrapper[7385]: I0319 09:26:18.730824 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:18.779564 master-0 kubenswrapper[7385]: I0319 09:26:18.779421 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc694742-6772-4a90-9f10-505d9e2eec3d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.780688 master-0 kubenswrapper[7385]: I0319 09:26:18.779654 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.780688 master-0 kubenswrapper[7385]: I0319 09:26:18.779914 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-var-lock\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.881696 master-0 kubenswrapper[7385]: I0319 09:26:18.881062 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.881696 master-0 kubenswrapper[7385]: I0319 09:26:18.881156 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-var-lock\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.881696 master-0 kubenswrapper[7385]: I0319 09:26:18.881209 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc694742-6772-4a90-9f10-505d9e2eec3d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.881696 master-0 kubenswrapper[7385]: I0319 09:26:18.881200 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.881696 master-0 kubenswrapper[7385]: I0319 09:26:18.881463 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-var-lock\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:18.901957 master-0 kubenswrapper[7385]: I0319 09:26:18.901884 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/fc694742-6772-4a90-9f10-505d9e2eec3d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:19.017729 master-0 kubenswrapper[7385]: I0319 09:26:19.017516 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 19 09:26:19.488096 master-0 kubenswrapper[7385]: I0319 09:26:19.488045 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 09:26:19.493166 master-0 kubenswrapper[7385]: W0319 09:26:19.493103 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfc694742_6772_4a90_9f10_505d9e2eec3d.slice/crio-55582e6903695241dd045647626133d5ac058b7caf84dd3ae435c1d21ca5d72d WatchSource:0}: Error finding container 55582e6903695241dd045647626133d5ac058b7caf84dd3ae435c1d21ca5d72d: Status 404 returned error can't find the container with id 55582e6903695241dd045647626133d5ac058b7caf84dd3ae435c1d21ca5d72d Mar 19 09:26:19.730884 master-0 kubenswrapper[7385]: I0319 09:26:19.730819 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:19.730884 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:19.730884 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:19.730884 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:19.731285 master-0 kubenswrapper[7385]: I0319 09:26:19.730904 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 19 09:26:20.249302 master-0 kubenswrapper[7385]: I0319 09:26:20.249150 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"fc694742-6772-4a90-9f10-505d9e2eec3d","Type":"ContainerStarted","Data":"6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570"} Mar 19 09:26:20.249302 master-0 kubenswrapper[7385]: I0319 09:26:20.249205 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"fc694742-6772-4a90-9f10-505d9e2eec3d","Type":"ContainerStarted","Data":"55582e6903695241dd045647626133d5ac058b7caf84dd3ae435c1d21ca5d72d"} Mar 19 09:26:20.273257 master-0 kubenswrapper[7385]: I0319 09:26:20.270578 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.270530598 podStartE2EDuration="2.270530598s" podCreationTimestamp="2026-03-19 09:26:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:26:20.268829562 +0000 UTC m=+475.943259263" watchObservedRunningTime="2026-03-19 09:26:20.270530598 +0000 UTC m=+475.944960319" Mar 19 09:26:20.730826 master-0 kubenswrapper[7385]: I0319 09:26:20.730741 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:20.730826 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:20.730826 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:20.730826 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:20.731119 master-0 kubenswrapper[7385]: I0319 09:26:20.730838 7385 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:21.730914 master-0 kubenswrapper[7385]: I0319 09:26:21.730845 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:21.730914 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:21.730914 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:21.730914 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:21.731638 master-0 kubenswrapper[7385]: I0319 09:26:21.730915 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:22.731641 master-0 kubenswrapper[7385]: I0319 09:26:22.731589 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:22.731641 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:22.731641 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:22.731641 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:22.732711 master-0 kubenswrapper[7385]: I0319 09:26:22.731649 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
19 09:26:23.730150 master-0 kubenswrapper[7385]: I0319 09:26:23.730090 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:23.730150 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:23.730150 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:23.730150 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:23.730150 master-0 kubenswrapper[7385]: I0319 09:26:23.730147 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:24.732351 master-0 kubenswrapper[7385]: I0319 09:26:24.731986 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:24.732351 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:24.732351 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:24.732351 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:24.732351 master-0 kubenswrapper[7385]: I0319 09:26:24.732102 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:25.731753 master-0 kubenswrapper[7385]: I0319 09:26:25.731678 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:25.731753 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:25.731753 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:25.731753 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:25.732173 master-0 kubenswrapper[7385]: I0319 09:26:25.731783 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:26.731270 master-0 kubenswrapper[7385]: I0319 09:26:26.731199 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:26.731270 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:26.731270 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:26.731270 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:26.732089 master-0 kubenswrapper[7385]: I0319 09:26:26.732054 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:27.731472 master-0 kubenswrapper[7385]: I0319 09:26:27.731429 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:27.731472 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:27.731472 
master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:27.731472 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:27.731472 master-0 kubenswrapper[7385]: I0319 09:26:27.731477 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:28.730403 master-0 kubenswrapper[7385]: I0319 09:26:28.730336 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:28.730403 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:28.730403 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:28.730403 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:28.730777 master-0 kubenswrapper[7385]: I0319 09:26:28.730416 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:29.731360 master-0 kubenswrapper[7385]: I0319 09:26:29.731274 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:29.731360 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:29.731360 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:29.731360 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:29.731360 master-0 kubenswrapper[7385]: I0319 09:26:29.731344 7385 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:30.730287 master-0 kubenswrapper[7385]: I0319 09:26:30.730224 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:30.730287 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:30.730287 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:30.730287 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:30.730287 master-0 kubenswrapper[7385]: I0319 09:26:30.730283 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:31.730474 master-0 kubenswrapper[7385]: I0319 09:26:31.730394 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:31.730474 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:31.730474 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:31.730474 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:31.730474 master-0 kubenswrapper[7385]: I0319 09:26:31.730449 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 19 09:26:32.730656 master-0 kubenswrapper[7385]: I0319 09:26:32.730511 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:32.730656 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:32.730656 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:32.730656 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:32.731281 master-0 kubenswrapper[7385]: I0319 09:26:32.731234 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:33.278876 master-0 kubenswrapper[7385]: I0319 09:26:33.278796 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 09:26:33.279114 master-0 kubenswrapper[7385]: I0319 09:26:33.279075 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="fc694742-6772-4a90-9f10-505d9e2eec3d" containerName="installer" containerID="cri-o://6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570" gracePeriod=30 Mar 19 09:26:33.668675 master-0 kubenswrapper[7385]: I0319 09:26:33.668624 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-nbj5j"] Mar 19 09:26:33.669365 master-0 kubenswrapper[7385]: I0319 09:26:33.669346 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.671368 master-0 kubenswrapper[7385]: I0319 09:26:33.671328 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-ptq7p" Mar 19 09:26:33.671619 master-0 kubenswrapper[7385]: I0319 09:26:33.671587 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 19 09:26:33.733939 master-0 kubenswrapper[7385]: I0319 09:26:33.733884 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:33.733939 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:33.733939 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:33.733939 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:33.734527 master-0 kubenswrapper[7385]: I0319 09:26:33.733944 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:33.809122 master-0 kubenswrapper[7385]: I0319 09:26:33.808998 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5770097-96ec-4b21-ac96-26bf027850bf-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.809332 master-0 kubenswrapper[7385]: I0319 09:26:33.809141 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/b5770097-96ec-4b21-ac96-26bf027850bf-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.809332 master-0 kubenswrapper[7385]: I0319 09:26:33.809210 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxgp\" (UniqueName: \"kubernetes.io/projected/b5770097-96ec-4b21-ac96-26bf027850bf-kube-api-access-grxgp\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.809332 master-0 kubenswrapper[7385]: I0319 09:26:33.809314 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b5770097-96ec-4b21-ac96-26bf027850bf-ready\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.910418 master-0 kubenswrapper[7385]: I0319 09:26:33.910371 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5770097-96ec-4b21-ac96-26bf027850bf-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.910863 master-0 kubenswrapper[7385]: I0319 09:26:33.910845 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b5770097-96ec-4b21-ac96-26bf027850bf-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.910991 master-0 kubenswrapper[7385]: I0319 09:26:33.910968 7385 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grxgp\" (UniqueName: \"kubernetes.io/projected/b5770097-96ec-4b21-ac96-26bf027850bf-kube-api-access-grxgp\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.911180 master-0 kubenswrapper[7385]: I0319 09:26:33.911158 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b5770097-96ec-4b21-ac96-26bf027850bf-ready\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.911625 master-0 kubenswrapper[7385]: I0319 09:26:33.911596 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b5770097-96ec-4b21-ac96-26bf027850bf-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.911701 master-0 kubenswrapper[7385]: I0319 09:26:33.911618 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b5770097-96ec-4b21-ac96-26bf027850bf-ready\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.911701 master-0 kubenswrapper[7385]: I0319 09:26:33.910565 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5770097-96ec-4b21-ac96-26bf027850bf-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.928098 master-0 kubenswrapper[7385]: I0319 
09:26:33.927976 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grxgp\" (UniqueName: \"kubernetes.io/projected/b5770097-96ec-4b21-ac96-26bf027850bf-kube-api-access-grxgp\") pod \"cni-sysctl-allowlist-ds-nbj5j\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:33.988018 master-0 kubenswrapper[7385]: I0319 09:26:33.987950 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:34.005928 master-0 kubenswrapper[7385]: W0319 09:26:34.005876 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5770097_96ec_4b21_ac96_26bf027850bf.slice/crio-0055cb4c6c131d6dc20d04d7968a8349f3a6e11d95d2bd7dcfb09f401f66d5f9 WatchSource:0}: Error finding container 0055cb4c6c131d6dc20d04d7968a8349f3a6e11d95d2bd7dcfb09f401f66d5f9: Status 404 returned error can't find the container with id 0055cb4c6c131d6dc20d04d7968a8349f3a6e11d95d2bd7dcfb09f401f66d5f9 Mar 19 09:26:34.328210 master-0 kubenswrapper[7385]: I0319 09:26:34.328138 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" event={"ID":"b5770097-96ec-4b21-ac96-26bf027850bf","Type":"ContainerStarted","Data":"04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4"} Mar 19 09:26:34.328210 master-0 kubenswrapper[7385]: I0319 09:26:34.328185 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" event={"ID":"b5770097-96ec-4b21-ac96-26bf027850bf","Type":"ContainerStarted","Data":"0055cb4c6c131d6dc20d04d7968a8349f3a6e11d95d2bd7dcfb09f401f66d5f9"} Mar 19 09:26:34.328434 master-0 kubenswrapper[7385]: I0319 09:26:34.328374 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 
09:26:34.730444 master-0 kubenswrapper[7385]: I0319 09:26:34.730396 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:34.730444 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:34.730444 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:34.730444 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:34.730855 master-0 kubenswrapper[7385]: I0319 09:26:34.730462 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:35.353188 master-0 kubenswrapper[7385]: I0319 09:26:35.353153 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:26:35.383516 master-0 kubenswrapper[7385]: I0319 09:26:35.383432 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" podStartSLOduration=2.383413994 podStartE2EDuration="2.383413994s" podCreationTimestamp="2026-03-19 09:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:26:34.384305495 +0000 UTC m=+490.058735196" watchObservedRunningTime="2026-03-19 09:26:35.383413994 +0000 UTC m=+491.057843695" Mar 19 09:26:35.634617 master-0 kubenswrapper[7385]: I0319 09:26:35.634462 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-nbj5j"] Mar 19 09:26:35.730574 master-0 kubenswrapper[7385]: I0319 09:26:35.730502 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:35.730574 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:35.730574 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:35.730574 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:35.730829 master-0 kubenswrapper[7385]: I0319 09:26:35.730601 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:36.075979 master-0 kubenswrapper[7385]: I0319 09:26:36.075854 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 19 09:26:36.076668 master-0 kubenswrapper[7385]: I0319 09:26:36.076644 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.091751 master-0 kubenswrapper[7385]: I0319 09:26:36.091713 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 19 09:26:36.239646 master-0 kubenswrapper[7385]: I0319 09:26:36.239573 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-var-lock\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.239857 master-0 kubenswrapper[7385]: I0319 09:26:36.239688 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kube-api-access\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.239857 master-0 kubenswrapper[7385]: I0319 09:26:36.239733 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.341119 master-0 kubenswrapper[7385]: I0319 09:26:36.340988 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kube-api-access\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.341119 master-0 
kubenswrapper[7385]: I0319 09:26:36.341065 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.341342 master-0 kubenswrapper[7385]: I0319 09:26:36.341155 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-var-lock\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.341342 master-0 kubenswrapper[7385]: I0319 09:26:36.341206 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.341342 master-0 kubenswrapper[7385]: I0319 09:26:36.341252 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-var-lock\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.355322 master-0 kubenswrapper[7385]: I0319 09:26:36.355287 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kube-api-access\") pod \"installer-3-master-0\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.395598 master-0 
kubenswrapper[7385]: I0319 09:26:36.395521 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 19 09:26:36.730982 master-0 kubenswrapper[7385]: I0319 09:26:36.730912 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:36.730982 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:36.730982 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:36.730982 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:36.731250 master-0 kubenswrapper[7385]: I0319 09:26:36.730985 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:36.776336 master-0 kubenswrapper[7385]: I0319 09:26:36.776287 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 19 09:26:36.784733 master-0 kubenswrapper[7385]: W0319 09:26:36.784686 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddc01ae64_4c0e_4e4e_95fc_2e0a8cdc79ff.slice/crio-8e718fb4862222c8d61b85888f2f5b0e6b6e3b234564b1bf0103d7c9fff5ed9b WatchSource:0}: Error finding container 8e718fb4862222c8d61b85888f2f5b0e6b6e3b234564b1bf0103d7c9fff5ed9b: Status 404 returned error can't find the container with id 8e718fb4862222c8d61b85888f2f5b0e6b6e3b234564b1bf0103d7c9fff5ed9b Mar 19 09:26:37.345737 master-0 kubenswrapper[7385]: I0319 09:26:37.345684 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" 
event={"ID":"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff","Type":"ContainerStarted","Data":"7e5369bb012f61110632c73803fab63159503b351960a03db1d17bbc38032f93"} Mar 19 09:26:37.345737 master-0 kubenswrapper[7385]: I0319 09:26:37.345735 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff","Type":"ContainerStarted","Data":"8e718fb4862222c8d61b85888f2f5b0e6b6e3b234564b1bf0103d7c9fff5ed9b"} Mar 19 09:26:37.345949 master-0 kubenswrapper[7385]: I0319 09:26:37.345788 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" podUID="b5770097-96ec-4b21-ac96-26bf027850bf" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" gracePeriod=30 Mar 19 09:26:37.364870 master-0 kubenswrapper[7385]: I0319 09:26:37.364805 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.3647903000000001 podStartE2EDuration="1.3647903s" podCreationTimestamp="2026-03-19 09:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:26:37.361693787 +0000 UTC m=+493.036123488" watchObservedRunningTime="2026-03-19 09:26:37.3647903 +0000 UTC m=+493.039220001" Mar 19 09:26:37.731833 master-0 kubenswrapper[7385]: I0319 09:26:37.731678 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:37.731833 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:37.731833 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 
09:26:37.731833 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:37.731833 master-0 kubenswrapper[7385]: I0319 09:26:37.731771 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:38.730016 master-0 kubenswrapper[7385]: I0319 09:26:38.729935 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:38.730016 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:38.730016 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:38.730016 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:38.730516 master-0 kubenswrapper[7385]: I0319 09:26:38.730028 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:39.731320 master-0 kubenswrapper[7385]: I0319 09:26:39.731248 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:39.731320 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:39.731320 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:39.731320 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:39.731320 master-0 kubenswrapper[7385]: I0319 09:26:39.731314 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:40.730660 master-0 kubenswrapper[7385]: I0319 09:26:40.730519 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:40.730660 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:40.730660 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:40.730660 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:40.731113 master-0 kubenswrapper[7385]: I0319 09:26:40.730669 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:40.886783 master-0 kubenswrapper[7385]: I0319 09:26:40.886669 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"] Mar 19 09:26:40.889286 master-0 kubenswrapper[7385]: I0319 09:26:40.889232 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:40.891312 master-0 kubenswrapper[7385]: I0319 09:26:40.891265 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-2hrp4" Mar 19 09:26:40.892496 master-0 kubenswrapper[7385]: I0319 09:26:40.892463 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 19 09:26:40.894329 master-0 kubenswrapper[7385]: I0319 09:26:40.894286 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 19 09:26:40.902206 master-0 kubenswrapper[7385]: I0319 09:26:40.902161 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 19 09:26:40.902476 master-0 kubenswrapper[7385]: I0319 09:26:40.902380 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 19 09:26:40.903170 master-0 kubenswrapper[7385]: I0319 09:26:40.903144 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 19 09:26:40.908341 master-0 kubenswrapper[7385]: I0319 09:26:40.908307 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 19 09:26:40.911172 master-0 kubenswrapper[7385]: I0319 09:26:40.911134 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"] Mar 19 09:26:41.006128 master-0 kubenswrapper[7385]: I0319 09:26:41.005995 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls\") pod 
\"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.006128 master-0 kubenswrapper[7385]: I0319 09:26:41.006067 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.006128 master-0 kubenswrapper[7385]: I0319 09:26:41.006095 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.006128 master-0 kubenswrapper[7385]: I0319 09:26:41.006123 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgrjz\" (UniqueName: \"kubernetes.io/projected/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-kube-api-access-lgrjz\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.006387 master-0 kubenswrapper[7385]: I0319 09:26:41.006164 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.006387 
master-0 kubenswrapper[7385]: I0319 09:26:41.006198 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.006387 master-0 kubenswrapper[7385]: I0319 09:26:41.006227 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.006387 master-0 kubenswrapper[7385]: I0319 09:26:41.006271 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.107443 master-0 kubenswrapper[7385]: I0319 09:26:41.107384 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.107681 master-0 kubenswrapper[7385]: I0319 09:26:41.107506 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" 
(UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.107681 master-0 kubenswrapper[7385]: I0319 09:26:41.107618 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.107681 master-0 kubenswrapper[7385]: I0319 09:26:41.107652 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.107681 master-0 kubenswrapper[7385]: I0319 09:26:41.107676 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgrjz\" (UniqueName: \"kubernetes.io/projected/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-kube-api-access-lgrjz\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.107855 master-0 kubenswrapper[7385]: I0319 09:26:41.107714 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" 
Mar 19 09:26:41.107967 master-0 kubenswrapper[7385]: I0319 09:26:41.107924 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.108026 master-0 kubenswrapper[7385]: I0319 09:26:41.108008 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.108919 master-0 kubenswrapper[7385]: I0319 09:26:41.108877 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.110645 master-0 kubenswrapper[7385]: I0319 09:26:41.110608 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.111243 master-0 kubenswrapper[7385]: I0319 09:26:41.111138 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.112189 master-0 kubenswrapper[7385]: I0319 09:26:41.112158 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.112395 master-0 kubenswrapper[7385]: I0319 09:26:41.112217 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.117595 master-0 kubenswrapper[7385]: I0319 09:26:41.113207 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.117595 master-0 kubenswrapper[7385]: I0319 09:26:41.113473 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 
09:26:41.143125 master-0 kubenswrapper[7385]: I0319 09:26:41.143061 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgrjz\" (UniqueName: \"kubernetes.io/projected/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-kube-api-access-lgrjz\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.210904 master-0 kubenswrapper[7385]: I0319 09:26:41.210855 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:26:41.631169 master-0 kubenswrapper[7385]: I0319 09:26:41.631121 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"] Mar 19 09:26:41.634842 master-0 kubenswrapper[7385]: W0319 09:26:41.634746 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd80f71af_e3ff_4a9f_8c9c_883a6a5581d0.slice/crio-58fb20f0efe35396beaa43bc3d7cc4b5db2f0e64b1edfa9263cafc7641e2c772 WatchSource:0}: Error finding container 58fb20f0efe35396beaa43bc3d7cc4b5db2f0e64b1edfa9263cafc7641e2c772: Status 404 returned error can't find the container with id 58fb20f0efe35396beaa43bc3d7cc4b5db2f0e64b1edfa9263cafc7641e2c772 Mar 19 09:26:41.637098 master-0 kubenswrapper[7385]: I0319 09:26:41.637052 7385 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 09:26:41.730340 master-0 kubenswrapper[7385]: I0319 09:26:41.730277 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:41.730340 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:41.730340 master-0 
kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:41.730340 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:41.730662 master-0 kubenswrapper[7385]: I0319 09:26:41.730347 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:42.373167 master-0 kubenswrapper[7385]: I0319 09:26:42.373117 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" event={"ID":"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0","Type":"ContainerStarted","Data":"58fb20f0efe35396beaa43bc3d7cc4b5db2f0e64b1edfa9263cafc7641e2c772"} Mar 19 09:26:42.731483 master-0 kubenswrapper[7385]: I0319 09:26:42.731373 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:42.731483 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:42.731483 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:42.731483 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:42.731483 master-0 kubenswrapper[7385]: I0319 09:26:42.731433 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:43.276063 master-0 kubenswrapper[7385]: I0319 09:26:43.272212 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"] Mar 19 09:26:43.276063 master-0 kubenswrapper[7385]: I0319 09:26:43.273283 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.280027 master-0 kubenswrapper[7385]: I0319 09:26:43.279902 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-cpq7s"
Mar 19 09:26:43.283672 master-0 kubenswrapper[7385]: I0319 09:26:43.283619 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"]
Mar 19 09:26:43.347085 master-0 kubenswrapper[7385]: I0319 09:26:43.347019 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtj5f\" (UniqueName: \"kubernetes.io/projected/4a73a5b0-478f-496d-8b0c-9e3daf39c082-kube-api-access-qtj5f\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.347280 master-0 kubenswrapper[7385]: I0319 09:26:43.347117 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.448713 master-0 kubenswrapper[7385]: I0319 09:26:43.448365 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtj5f\" (UniqueName: \"kubernetes.io/projected/4a73a5b0-478f-496d-8b0c-9e3daf39c082-kube-api-access-qtj5f\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.448713 master-0 kubenswrapper[7385]: I0319 09:26:43.448431 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.451733 master-0 kubenswrapper[7385]: I0319 09:26:43.451694 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.465185 master-0 kubenswrapper[7385]: I0319 09:26:43.465140 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtj5f\" (UniqueName: \"kubernetes.io/projected/4a73a5b0-478f-496d-8b0c-9e3daf39c082-kube-api-access-qtj5f\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.599603 master-0 kubenswrapper[7385]: I0319 09:26:43.599525 7385 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:26:43.731525 master-0 kubenswrapper[7385]: I0319 09:26:43.731445 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:26:43.731525 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:43.731525 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:43.731525 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:43.731843 master-0 kubenswrapper[7385]: I0319 09:26:43.731535 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:43.990776 master-0 kubenswrapper[7385]: E0319 09:26:43.990410 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 09:26:43.991709 master-0 kubenswrapper[7385]: E0319 09:26:43.991646 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 09:26:43.995868 master-0 kubenswrapper[7385]: E0319 09:26:43.992717 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 19 09:26:43.995868 master-0 kubenswrapper[7385]: E0319 09:26:43.992747 7385 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" podUID="b5770097-96ec-4b21-ac96-26bf027850bf" containerName="kube-multus-additional-cni-plugins"
Mar 19 09:26:44.386020 master-0 kubenswrapper[7385]: I0319 09:26:44.385926 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" event={"ID":"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0","Type":"ContainerStarted","Data":"c718bd5b4be98027af9c1889bd5e2262192d977468e1bd224d642504725b3155"}
Mar 19 09:26:44.437115 master-0 kubenswrapper[7385]: I0319 09:26:44.437066 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"]
Mar 19 09:26:44.438171 master-0 kubenswrapper[7385]: W0319 09:26:44.438124 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a73a5b0_478f_496d_8b0c_9e3daf39c082.slice/crio-7bdc639c2478b5c195d66a7791ae65075a49456c359aa49e7fc420db2f85021a WatchSource:0}: Error finding container 7bdc639c2478b5c195d66a7791ae65075a49456c359aa49e7fc420db2f85021a: Status 404 returned error can't find the container with id 7bdc639c2478b5c195d66a7791ae65075a49456c359aa49e7fc420db2f85021a
Mar 19 09:26:44.732338 master-0 kubenswrapper[7385]: I0319 09:26:44.732262 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http
failed: reason withheld
Mar 19 09:26:44.732338 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:44.732338 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:44.732338 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:44.732976 master-0 kubenswrapper[7385]: I0319 09:26:44.732372 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:45.400082 master-0 kubenswrapper[7385]: I0319 09:26:45.399555 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn" event={"ID":"4a73a5b0-478f-496d-8b0c-9e3daf39c082","Type":"ContainerStarted","Data":"e598ac2a23a11eed8976c3e0e4048dc8c664a397bb13d57b7d82dd220b7e5d36"}
Mar 19 09:26:45.400082 master-0 kubenswrapper[7385]: I0319 09:26:45.399604 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn" event={"ID":"4a73a5b0-478f-496d-8b0c-9e3daf39c082","Type":"ContainerStarted","Data":"91bfa21a2eb69f63bfe36ef8b78186abad26820c4ec6b4e792fe4e2820000771"}
Mar 19 09:26:45.400082 master-0 kubenswrapper[7385]: I0319 09:26:45.399614 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn" event={"ID":"4a73a5b0-478f-496d-8b0c-9e3daf39c082","Type":"ContainerStarted","Data":"7bdc639c2478b5c195d66a7791ae65075a49456c359aa49e7fc420db2f85021a"}
Mar 19 09:26:45.424997 master-0 kubenswrapper[7385]: I0319 09:26:45.424824 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn" podStartSLOduration=2.424775275 podStartE2EDuration="2.424775275s" podCreationTimestamp="2026-03-19 09:26:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:26:45.424703713 +0000 UTC m=+501.099133434" watchObservedRunningTime="2026-03-19 09:26:45.424775275 +0000 UTC m=+501.099204996"
Mar 19 09:26:45.479456 master-0 kubenswrapper[7385]: I0319 09:26:45.479405 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"]
Mar 19 09:26:45.479692 master-0 kubenswrapper[7385]: I0319 09:26:45.479642 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="multus-admission-controller" containerID="cri-o://878e0d63701a1caf794ebb2ed5a4a759d206a20246066ad1acd5bdfd53aa835e" gracePeriod=30
Mar 19 09:26:45.479776 master-0 kubenswrapper[7385]: I0319 09:26:45.479748 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="kube-rbac-proxy" containerID="cri-o://a0def10435beba37cc4f2c51d6d95e5b8b0c440dcd92fc57f96ff4a342fc9bce" gracePeriod=30
Mar 19 09:26:45.730773 master-0 kubenswrapper[7385]: I0319 09:26:45.730721 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:26:45.730773 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:45.730773 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:45.730773 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:45.731068 master-0 kubenswrapper[7385]: I0319 09:26:45.731037 7385 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:46.407121 master-0 kubenswrapper[7385]: I0319 09:26:46.407075 7385 generic.go:334] "Generic (PLEG): container finished" podID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerID="a0def10435beba37cc4f2c51d6d95e5b8b0c440dcd92fc57f96ff4a342fc9bce" exitCode=0
Mar 19 09:26:46.407673 master-0 kubenswrapper[7385]: I0319 09:26:46.407155 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" event={"ID":"3816f149-ddce-41c8-a540-fe866ee71c5e","Type":"ContainerDied","Data":"a0def10435beba37cc4f2c51d6d95e5b8b0c440dcd92fc57f96ff4a342fc9bce"}
Mar 19 09:26:46.409206 master-0 kubenswrapper[7385]: I0319 09:26:46.409180 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" event={"ID":"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0","Type":"ContainerStarted","Data":"571c72829902ab0d0347ed3cbc9bceba672fc7dd62c625d654b2f743b0ac07df"}
Mar 19 09:26:46.409291 master-0 kubenswrapper[7385]: I0319 09:26:46.409220 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" event={"ID":"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0","Type":"ContainerStarted","Data":"117f79923603278e6f7f3e2f8d9d32939d3de54011a729137134c93a53256094"}
Mar 19 09:26:46.432758 master-0 kubenswrapper[7385]: I0319 09:26:46.432688 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" podStartSLOduration=2.361408346 podStartE2EDuration="6.43266892s" podCreationTimestamp="2026-03-19 09:26:40 +0000 UTC" firstStartedPulling="2026-03-19 09:26:41.636928918 +0000 UTC m=+497.311358619" lastFinishedPulling="2026-03-19 09:26:45.708189492 +0000 UTC m=+501.382619193" observedRunningTime="2026-03-19 09:26:46.428704593 +0000 UTC m=+502.103134304" watchObservedRunningTime="2026-03-19 09:26:46.43266892 +0000 UTC m=+502.107098631"
Mar 19 09:26:46.730709 master-0 kubenswrapper[7385]: I0319 09:26:46.730603 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:26:46.730709 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:46.730709 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:46.730709 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:46.730709 master-0 kubenswrapper[7385]: I0319 09:26:46.730675 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:48.081876 master-0 kubenswrapper[7385]: I0319 09:26:48.081773 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:26:48.081876 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:48.081876 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:48.081876 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:48.082582 master-0 kubenswrapper[7385]: I0319 09:26:48.081945 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:48.166934 master-0 kubenswrapper[7385]:
I0319 09:26:48.166874 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 19 09:26:48.167182 master-0 kubenswrapper[7385]: I0319 09:26:48.167131 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-3-master-0" podUID="dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" containerName="installer" containerID="cri-o://7e5369bb012f61110632c73803fab63159503b351960a03db1d17bbc38032f93" gracePeriod=30
Mar 19 09:26:48.730677 master-0 kubenswrapper[7385]: I0319 09:26:48.730594 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:26:48.730677 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:48.730677 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:48.730677 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:48.730677 master-0 kubenswrapper[7385]: I0319 09:26:48.730675 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:49.573477 master-0 kubenswrapper[7385]: I0319 09:26:49.573182 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff/installer/0.log"
Mar 19 09:26:49.573477 master-0 kubenswrapper[7385]: I0319 09:26:49.573230 7385 generic.go:334] "Generic (PLEG): container finished" podID="dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" containerID="7e5369bb012f61110632c73803fab63159503b351960a03db1d17bbc38032f93" exitCode=1
Mar 19 09:26:49.573477 master-0 kubenswrapper[7385]: I0319 09:26:49.573274 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff","Type":"ContainerDied","Data":"7e5369bb012f61110632c73803fab63159503b351960a03db1d17bbc38032f93"}
Mar 19 09:26:49.730329 master-0 kubenswrapper[7385]: I0319 09:26:49.730265 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:26:49.730329 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:49.730329 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:49.730329 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:49.730329 master-0 kubenswrapper[7385]: I0319 09:26:49.730319 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:49.845439 master-0 kubenswrapper[7385]: I0319 09:26:49.845386 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff/installer/0.log"
Mar 19 09:26:49.845668 master-0 kubenswrapper[7385]: I0319 09:26:49.845472 7385 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 09:26:49.881527 master-0 kubenswrapper[7385]: I0319 09:26:49.881475 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 19 09:26:49.881811 master-0 kubenswrapper[7385]: E0319 09:26:49.881784 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" containerName="installer"
Mar 19 09:26:49.881811 master-0 kubenswrapper[7385]: I0319 09:26:49.881804 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" containerName="installer"
Mar 19 09:26:49.881984 master-0 kubenswrapper[7385]: I0319 09:26:49.881956 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" containerName="installer"
Mar 19 09:26:49.882498 master-0 kubenswrapper[7385]: I0319 09:26:49.882473 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:49.930493 master-0 kubenswrapper[7385]: I0319 09:26:49.930436 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/925be58b-a4e2-448b-afb4-4b4d689ae64c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:49.930767 master-0 kubenswrapper[7385]: I0319 09:26:49.930571 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:49.931027 master-0 kubenswrapper[7385]: I0319 09:26:49.930992 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-var-lock\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:49.933243 master-0 kubenswrapper[7385]: I0319 09:26:49.933203 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 19 09:26:50.032325 master-0 kubenswrapper[7385]: I0319 09:26:50.032188 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kube-api-access\") pod \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") "
Mar 19 09:26:50.032969 master-0 kubenswrapper[7385]: I0319 09:26:50.032650 7385
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-var-lock\") pod \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") "
Mar 19 09:26:50.032969 master-0 kubenswrapper[7385]: I0319 09:26:50.032692 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kubelet-dir\") pod \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\" (UID: \"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff\") "
Mar 19 09:26:50.032969 master-0 kubenswrapper[7385]: I0319 09:26:50.032738 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-var-lock" (OuterVolumeSpecName: "var-lock") pod "dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" (UID: "dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:26:50.032969 master-0 kubenswrapper[7385]: I0319 09:26:50.032867 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" (UID: "dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:26:50.032969 master-0 kubenswrapper[7385]: I0319 09:26:50.032882 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:50.033263 master-0 kubenswrapper[7385]: I0319 09:26:50.032999 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:50.033263 master-0 kubenswrapper[7385]: I0319 09:26:50.033048 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-var-lock\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:50.033263 master-0 kubenswrapper[7385]: I0319 09:26:50.033140 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/925be58b-a4e2-448b-afb4-4b4d689ae64c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:50.033263 master-0 kubenswrapper[7385]: I0319 09:26:50.033221 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:26:50.033263 master-0 kubenswrapper[7385]: I0319 09:26:50.033235 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:26:50.033263 master-0 kubenswrapper[7385]: I0319 09:26:50.033240 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-var-lock\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:50.035845 master-0 kubenswrapper[7385]: I0319 09:26:50.035781 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" (UID: "dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:26:50.049357 master-0 kubenswrapper[7385]: I0319 09:26:50.049323 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/925be58b-a4e2-448b-afb4-4b4d689ae64c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:50.134145 master-0 kubenswrapper[7385]: I0319 09:26:50.134106 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:26:50.208012 master-0 kubenswrapper[7385]: I0319 09:26:50.207895 7385 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:26:50.590862 master-0 kubenswrapper[7385]: I0319 09:26:50.589960 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 19 09:26:50.590862 master-0 kubenswrapper[7385]: I0319 09:26:50.590078 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff/installer/0.log"
Mar 19 09:26:50.590862 master-0 kubenswrapper[7385]: I0319 09:26:50.590213 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff","Type":"ContainerDied","Data":"8e718fb4862222c8d61b85888f2f5b0e6b6e3b234564b1bf0103d7c9fff5ed9b"}
Mar 19 09:26:50.590862 master-0 kubenswrapper[7385]: I0319 09:26:50.590343 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 19 09:26:50.590862 master-0 kubenswrapper[7385]: I0319 09:26:50.590372 7385 scope.go:117] "RemoveContainer" containerID="7e5369bb012f61110632c73803fab63159503b351960a03db1d17bbc38032f93"
Mar 19 09:26:50.686792 master-0 kubenswrapper[7385]: I0319 09:26:50.684720 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 19 09:26:50.689527 master-0 kubenswrapper[7385]: I0319 09:26:50.689476 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 19 09:26:50.735046 master-0 kubenswrapper[7385]: I0319 09:26:50.734953 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:26:50.735046 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:26:50.735046 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:26:50.735046 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:26:50.735259 master-0 kubenswrapper[7385]: I0319 09:26:50.735062 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:26:50.932617 master-0 kubenswrapper[7385]: I0319 09:26:50.932588 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_fc694742-6772-4a90-9f10-505d9e2eec3d/installer/0.log"
Mar 19 09:26:50.932842 master-0 kubenswrapper[7385]: I0319 09:26:50.932649 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 19 09:26:50.944482 master-0 kubenswrapper[7385]: I0319 09:26:50.944280 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc694742-6772-4a90-9f10-505d9e2eec3d-kube-api-access\") pod \"fc694742-6772-4a90-9f10-505d9e2eec3d\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") "
Mar 19 09:26:50.944482 master-0 kubenswrapper[7385]: I0319 09:26:50.944370 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-var-lock\") pod \"fc694742-6772-4a90-9f10-505d9e2eec3d\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") "
Mar 19 09:26:50.944482 master-0 kubenswrapper[7385]: I0319 09:26:50.944400 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName:
\"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-kubelet-dir\") pod \"fc694742-6772-4a90-9f10-505d9e2eec3d\" (UID: \"fc694742-6772-4a90-9f10-505d9e2eec3d\") "
Mar 19 09:26:50.944800 master-0 kubenswrapper[7385]: I0319 09:26:50.944488 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-var-lock" (OuterVolumeSpecName: "var-lock") pod "fc694742-6772-4a90-9f10-505d9e2eec3d" (UID: "fc694742-6772-4a90-9f10-505d9e2eec3d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:26:50.944800 master-0 kubenswrapper[7385]: I0319 09:26:50.944635 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc694742-6772-4a90-9f10-505d9e2eec3d" (UID: "fc694742-6772-4a90-9f10-505d9e2eec3d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:26:50.945671 master-0 kubenswrapper[7385]: I0319 09:26:50.944938 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:26:50.945671 master-0 kubenswrapper[7385]: I0319 09:26:50.944967 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc694742-6772-4a90-9f10-505d9e2eec3d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:26:50.954626 master-0 kubenswrapper[7385]: I0319 09:26:50.953092 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc694742-6772-4a90-9f10-505d9e2eec3d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc694742-6772-4a90-9f10-505d9e2eec3d" (UID: "fc694742-6772-4a90-9f10-505d9e2eec3d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:26:51.045604 master-0 kubenswrapper[7385]: I0319 09:26:51.045558 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc694742-6772-4a90-9f10-505d9e2eec3d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:26:51.601402 master-0 kubenswrapper[7385]: I0319 09:26:51.601334 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_fc694742-6772-4a90-9f10-505d9e2eec3d/installer/0.log"
Mar 19 09:26:51.601917 master-0 kubenswrapper[7385]: I0319 09:26:51.601422 7385 generic.go:334] "Generic (PLEG): container finished" podID="fc694742-6772-4a90-9f10-505d9e2eec3d" containerID="6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570" exitCode=1
Mar 19 09:26:51.601917 master-0 kubenswrapper[7385]: I0319 09:26:51.601501 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 19 09:26:51.601917 master-0 kubenswrapper[7385]: I0319 09:26:51.601515 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"fc694742-6772-4a90-9f10-505d9e2eec3d","Type":"ContainerDied","Data":"6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570"}
Mar 19 09:26:51.601917 master-0 kubenswrapper[7385]: I0319 09:26:51.601585 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"fc694742-6772-4a90-9f10-505d9e2eec3d","Type":"ContainerDied","Data":"55582e6903695241dd045647626133d5ac058b7caf84dd3ae435c1d21ca5d72d"}
Mar 19 09:26:51.601917 master-0 kubenswrapper[7385]: I0319 09:26:51.601607 7385 scope.go:117] "RemoveContainer" containerID="6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570"
Mar 19 09:26:51.603115 master-0 kubenswrapper[7385]: I0319 09:26:51.603082 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"925be58b-a4e2-448b-afb4-4b4d689ae64c","Type":"ContainerStarted","Data":"d34b15333e7215221eb3166bafa905cc720923c5b54182dc9d2d804528d9b642"}
Mar 19 09:26:51.603180 master-0 kubenswrapper[7385]: I0319 09:26:51.603133 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"925be58b-a4e2-448b-afb4-4b4d689ae64c","Type":"ContainerStarted","Data":"2b815bb5a4f3237642901cf478d08543a7c45d3f20aa5aa587a69d0647d632b8"}
Mar 19 09:26:51.621179 master-0 kubenswrapper[7385]: I0319 09:26:51.621145 7385 scope.go:117] "RemoveContainer" containerID="6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570"
Mar 19 09:26:51.621909 master-0 kubenswrapper[7385]: E0319 09:26:51.621861 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code =
NotFound desc = could not find container \"6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570\": container with ID starting with 6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570 not found: ID does not exist" containerID="6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570" Mar 19 09:26:51.622021 master-0 kubenswrapper[7385]: I0319 09:26:51.621978 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570"} err="failed to get container status \"6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570\": rpc error: code = NotFound desc = could not find container \"6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570\": container with ID starting with 6497aaaae458b6f0fa711788e9a4b6b065416fba284d90a77d3fef20eaf50570 not found: ID does not exist" Mar 19 09:26:51.622734 master-0 kubenswrapper[7385]: I0319 09:26:51.622673 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.622662495 podStartE2EDuration="2.622662495s" podCreationTimestamp="2026-03-19 09:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:26:51.621425642 +0000 UTC m=+507.295855343" watchObservedRunningTime="2026-03-19 09:26:51.622662495 +0000 UTC m=+507.297092196" Mar 19 09:26:51.679649 master-0 kubenswrapper[7385]: I0319 09:26:51.679516 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 09:26:51.695832 master-0 kubenswrapper[7385]: I0319 09:26:51.695771 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 19 09:26:51.730649 master-0 kubenswrapper[7385]: I0319 09:26:51.730520 7385 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:51.730649 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:51.730649 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:51.730649 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:51.730649 master-0 kubenswrapper[7385]: I0319 09:26:51.730580 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:52.539692 master-0 kubenswrapper[7385]: I0319 09:26:52.539616 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff" path="/var/lib/kubelet/pods/dc01ae64-4c0e-4e4e-95fc-2e0a8cdc79ff/volumes" Mar 19 09:26:52.540248 master-0 kubenswrapper[7385]: I0319 09:26:52.540214 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc694742-6772-4a90-9f10-505d9e2eec3d" path="/var/lib/kubelet/pods/fc694742-6772-4a90-9f10-505d9e2eec3d/volumes" Mar 19 09:26:52.731407 master-0 kubenswrapper[7385]: I0319 09:26:52.731326 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:52.731407 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:52.731407 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:52.731407 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:52.732068 master-0 kubenswrapper[7385]: I0319 09:26:52.731436 7385 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:53.730270 master-0 kubenswrapper[7385]: I0319 09:26:53.730207 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:53.730270 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:53.730270 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:53.730270 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:53.730270 master-0 kubenswrapper[7385]: I0319 09:26:53.730269 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:53.990532 master-0 kubenswrapper[7385]: E0319 09:26:53.990412 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 09:26:53.992077 master-0 kubenswrapper[7385]: E0319 09:26:53.991909 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 09:26:53.993377 master-0 kubenswrapper[7385]: E0319 09:26:53.993304 7385 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 09:26:53.993377 master-0 kubenswrapper[7385]: E0319 09:26:53.993346 7385 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" podUID="b5770097-96ec-4b21-ac96-26bf027850bf" containerName="kube-multus-additional-cni-plugins" Mar 19 09:26:54.730114 master-0 kubenswrapper[7385]: I0319 09:26:54.730042 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:54.730114 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:54.730114 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:54.730114 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:54.730515 master-0 kubenswrapper[7385]: I0319 09:26:54.730113 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:55.731595 master-0 kubenswrapper[7385]: I0319 09:26:55.731510 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:55.731595 master-0 
kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:55.731595 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:55.731595 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:55.732246 master-0 kubenswrapper[7385]: I0319 09:26:55.731619 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:56.731609 master-0 kubenswrapper[7385]: I0319 09:26:56.731558 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:56.731609 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:56.731609 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:56.731609 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:56.732372 master-0 kubenswrapper[7385]: I0319 09:26:56.732339 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:57.730718 master-0 kubenswrapper[7385]: I0319 09:26:57.730653 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:57.730718 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:57.730718 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:57.730718 master-0 kubenswrapper[7385]: healthz check failed Mar 19 
09:26:57.731008 master-0 kubenswrapper[7385]: I0319 09:26:57.730730 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:58.731317 master-0 kubenswrapper[7385]: I0319 09:26:58.731229 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:58.731317 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:58.731317 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:58.731317 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:58.731317 master-0 kubenswrapper[7385]: I0319 09:26:58.731287 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:26:59.073403 master-0 kubenswrapper[7385]: I0319 09:26:59.073231 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 09:26:59.073679 master-0 kubenswrapper[7385]: E0319 09:26:59.073631 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc694742-6772-4a90-9f10-505d9e2eec3d" containerName="installer" Mar 19 09:26:59.073679 master-0 kubenswrapper[7385]: I0319 09:26:59.073653 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc694742-6772-4a90-9f10-505d9e2eec3d" containerName="installer" Mar 19 09:26:59.073869 master-0 kubenswrapper[7385]: I0319 09:26:59.073844 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc694742-6772-4a90-9f10-505d9e2eec3d" 
containerName="installer" Mar 19 09:26:59.074495 master-0 kubenswrapper[7385]: I0319 09:26:59.074455 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.076454 master-0 kubenswrapper[7385]: I0319 09:26:59.076391 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 19 09:26:59.077520 master-0 kubenswrapper[7385]: I0319 09:26:59.077485 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-j48rl" Mar 19 09:26:59.108267 master-0 kubenswrapper[7385]: I0319 09:26:59.108152 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 09:26:59.263895 master-0 kubenswrapper[7385]: I0319 09:26:59.263835 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.264305 master-0 kubenswrapper[7385]: I0319 09:26:59.264275 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3de6f41a-64ab-440c-b8e4-0e947045be07-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.264662 master-0 kubenswrapper[7385]: I0319 09:26:59.264639 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-kubelet-dir\") pod \"installer-1-retry-1-master-0\" 
(UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.366516 master-0 kubenswrapper[7385]: I0319 09:26:59.366426 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.366754 master-0 kubenswrapper[7385]: I0319 09:26:59.366700 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.366798 master-0 kubenswrapper[7385]: I0319 09:26:59.366740 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.366852 master-0 kubenswrapper[7385]: I0319 09:26:59.366751 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3de6f41a-64ab-440c-b8e4-0e947045be07-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.366894 master-0 kubenswrapper[7385]: I0319 09:26:59.366684 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-kubelet-dir\") pod 
\"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.401136 master-0 kubenswrapper[7385]: I0319 09:26:59.401087 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3de6f41a-64ab-440c-b8e4-0e947045be07-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.701722 master-0 kubenswrapper[7385]: I0319 09:26:59.701563 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:26:59.731715 master-0 kubenswrapper[7385]: I0319 09:26:59.731663 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:26:59.731715 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:26:59.731715 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:26:59.731715 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:26:59.732821 master-0 kubenswrapper[7385]: I0319 09:26:59.731737 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:27:00.138875 master-0 kubenswrapper[7385]: I0319 09:27:00.138825 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 09:27:00.139259 master-0 kubenswrapper[7385]: W0319 09:27:00.139221 7385 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod3de6f41a_64ab_440c_b8e4_0e947045be07.slice/crio-f44617f2c8372d662c88e0e11139aa9c411605b5f6563263f4e85fcbc2405f87 WatchSource:0}: Error finding container f44617f2c8372d662c88e0e11139aa9c411605b5f6563263f4e85fcbc2405f87: Status 404 returned error can't find the container with id f44617f2c8372d662c88e0e11139aa9c411605b5f6563263f4e85fcbc2405f87 Mar 19 09:27:00.661952 master-0 kubenswrapper[7385]: I0319 09:27:00.661840 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3de6f41a-64ab-440c-b8e4-0e947045be07","Type":"ContainerStarted","Data":"f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc"} Mar 19 09:27:00.661952 master-0 kubenswrapper[7385]: I0319 09:27:00.661900 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3de6f41a-64ab-440c-b8e4-0e947045be07","Type":"ContainerStarted","Data":"f44617f2c8372d662c88e0e11139aa9c411605b5f6563263f4e85fcbc2405f87"} Mar 19 09:27:00.700150 master-0 kubenswrapper[7385]: I0319 09:27:00.699996 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=1.699971508 podStartE2EDuration="1.699971508s" podCreationTimestamp="2026-03-19 09:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:27:00.694609425 +0000 UTC m=+516.369039166" watchObservedRunningTime="2026-03-19 09:27:00.699971508 +0000 UTC m=+516.374401209" Mar 19 09:27:00.730693 master-0 kubenswrapper[7385]: I0319 09:27:00.730626 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 
09:27:00.730693 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:27:00.730693 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:27:00.730693 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:27:00.730693 master-0 kubenswrapper[7385]: I0319 09:27:00.730690 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:27:01.730801 master-0 kubenswrapper[7385]: I0319 09:27:01.730742 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:27:01.730801 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:27:01.730801 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:27:01.730801 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:27:01.731670 master-0 kubenswrapper[7385]: I0319 09:27:01.730809 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:27:02.730617 master-0 kubenswrapper[7385]: I0319 09:27:02.730473 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:27:02.730617 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:27:02.730617 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:27:02.730617 master-0 kubenswrapper[7385]: healthz 
check failed Mar 19 09:27:02.730617 master-0 kubenswrapper[7385]: I0319 09:27:02.730611 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:27:03.730720 master-0 kubenswrapper[7385]: I0319 09:27:03.730653 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:27:03.730720 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:27:03.730720 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:27:03.730720 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:27:03.730720 master-0 kubenswrapper[7385]: I0319 09:27:03.730716 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:27:03.990448 master-0 kubenswrapper[7385]: E0319 09:27:03.990330 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 09:27:03.991386 master-0 kubenswrapper[7385]: E0319 09:27:03.991330 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 09:27:03.992430 master-0 kubenswrapper[7385]: E0319 09:27:03.992387 7385 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 19 09:27:03.992495 master-0 kubenswrapper[7385]: E0319 09:27:03.992431 7385 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" podUID="b5770097-96ec-4b21-ac96-26bf027850bf" containerName="kube-multus-additional-cni-plugins" Mar 19 09:27:04.731226 master-0 kubenswrapper[7385]: I0319 09:27:04.731170 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:27:04.731226 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:27:04.731226 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:27:04.731226 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:27:04.732102 master-0 kubenswrapper[7385]: I0319 09:27:04.731245 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:27:05.296866 master-0 kubenswrapper[7385]: I0319 09:27:05.296816 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 09:27:05.297309 master-0 kubenswrapper[7385]: 
I0319 09:27:05.297278 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="3de6f41a-64ab-440c-b8e4-0e947045be07" containerName="installer" containerID="cri-o://f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc" gracePeriod=30 Mar 19 09:27:05.730524 master-0 kubenswrapper[7385]: I0319 09:27:05.730458 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:27:05.730524 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:27:05.730524 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:27:05.730524 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:27:05.730524 master-0 kubenswrapper[7385]: I0319 09:27:05.730527 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:27:05.731005 master-0 kubenswrapper[7385]: I0319 09:27:05.730641 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:27:05.731235 master-0 kubenswrapper[7385]: I0319 09:27:05.731185 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"3a5dd314e61c7e5e336d52053d0330f63d21f00e76686c7b0a177fb71dc220dc"} pod="openshift-ingress/router-default-7dcf5569b5-k99cg" containerMessage="Container router failed startup probe, will be restarted" Mar 19 09:27:05.731235 master-0 kubenswrapper[7385]: I0319 09:27:05.731225 7385 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" containerID="cri-o://3a5dd314e61c7e5e336d52053d0330f63d21f00e76686c7b0a177fb71dc220dc" gracePeriod=3600 Mar 19 09:27:06.487769 master-0 kubenswrapper[7385]: I0319 09:27:06.487660 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 19 09:27:06.489062 master-0 kubenswrapper[7385]: I0319 09:27:06.489010 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.491166 master-0 kubenswrapper[7385]: I0319 09:27:06.491106 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-444qc" Mar 19 09:27:06.491166 master-0 kubenswrapper[7385]: I0319 09:27:06.491164 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 19 09:27:06.540690 master-0 kubenswrapper[7385]: I0319 09:27:06.540587 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 19 09:27:06.671585 master-0 kubenswrapper[7385]: I0319 09:27:06.671491 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.671840 master-0 kubenswrapper[7385]: I0319 09:27:06.671799 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-var-lock\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.671944 master-0 kubenswrapper[7385]: I0319 09:27:06.671906 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d04277ae-5881-4ce1-9157-d58f93a5f116-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.773733 master-0 kubenswrapper[7385]: I0319 09:27:06.773537 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.773733 master-0 kubenswrapper[7385]: I0319 09:27:06.773687 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-var-lock\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.773733 master-0 kubenswrapper[7385]: I0319 09:27:06.773713 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d04277ae-5881-4ce1-9157-d58f93a5f116-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.774451 master-0 kubenswrapper[7385]: I0319 09:27:06.773936 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-var-lock\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " 
pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.774451 master-0 kubenswrapper[7385]: I0319 09:27:06.773981 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:06.814023 master-0 kubenswrapper[7385]: I0319 09:27:06.813963 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d04277ae-5881-4ce1-9157-d58f93a5f116-kube-api-access\") pod \"installer-4-master-0\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:07.111250 master-0 kubenswrapper[7385]: I0319 09:27:07.111186 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:07.516386 master-0 kubenswrapper[7385]: I0319 09:27:07.516258 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-nbj5j_b5770097-96ec-4b21-ac96-26bf027850bf/kube-multus-additional-cni-plugins/0.log" Mar 19 09:27:07.516386 master-0 kubenswrapper[7385]: I0319 09:27:07.516341 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:27:07.608862 master-0 kubenswrapper[7385]: I0319 09:27:07.608727 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 19 09:27:07.687366 master-0 kubenswrapper[7385]: I0319 09:27:07.687311 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b5770097-96ec-4b21-ac96-26bf027850bf-cni-sysctl-allowlist\") pod \"b5770097-96ec-4b21-ac96-26bf027850bf\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " Mar 19 09:27:07.687618 master-0 kubenswrapper[7385]: I0319 09:27:07.687402 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b5770097-96ec-4b21-ac96-26bf027850bf-ready\") pod \"b5770097-96ec-4b21-ac96-26bf027850bf\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " Mar 19 09:27:07.687618 master-0 kubenswrapper[7385]: I0319 09:27:07.687495 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5770097-96ec-4b21-ac96-26bf027850bf-tuning-conf-dir\") pod \"b5770097-96ec-4b21-ac96-26bf027850bf\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " Mar 19 09:27:07.687618 master-0 kubenswrapper[7385]: I0319 09:27:07.687583 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grxgp\" (UniqueName: \"kubernetes.io/projected/b5770097-96ec-4b21-ac96-26bf027850bf-kube-api-access-grxgp\") pod \"b5770097-96ec-4b21-ac96-26bf027850bf\" (UID: \"b5770097-96ec-4b21-ac96-26bf027850bf\") " Mar 19 09:27:07.687763 master-0 kubenswrapper[7385]: I0319 09:27:07.687723 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5770097-96ec-4b21-ac96-26bf027850bf-tuning-conf-dir" (OuterVolumeSpecName: 
"tuning-conf-dir") pod "b5770097-96ec-4b21-ac96-26bf027850bf" (UID: "b5770097-96ec-4b21-ac96-26bf027850bf"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:27:07.687935 master-0 kubenswrapper[7385]: I0319 09:27:07.687895 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5770097-96ec-4b21-ac96-26bf027850bf-ready" (OuterVolumeSpecName: "ready") pod "b5770097-96ec-4b21-ac96-26bf027850bf" (UID: "b5770097-96ec-4b21-ac96-26bf027850bf"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:27:07.688059 master-0 kubenswrapper[7385]: I0319 09:27:07.688024 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5770097-96ec-4b21-ac96-26bf027850bf-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "b5770097-96ec-4b21-ac96-26bf027850bf" (UID: "b5770097-96ec-4b21-ac96-26bf027850bf"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:27:07.688839 master-0 kubenswrapper[7385]: I0319 09:27:07.688817 7385 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b5770097-96ec-4b21-ac96-26bf027850bf-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:07.688975 master-0 kubenswrapper[7385]: I0319 09:27:07.688841 7385 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/b5770097-96ec-4b21-ac96-26bf027850bf-ready\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:07.688975 master-0 kubenswrapper[7385]: I0319 09:27:07.688853 7385 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b5770097-96ec-4b21-ac96-26bf027850bf-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:07.691474 master-0 kubenswrapper[7385]: I0319 09:27:07.691440 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5770097-96ec-4b21-ac96-26bf027850bf-kube-api-access-grxgp" (OuterVolumeSpecName: "kube-api-access-grxgp") pod "b5770097-96ec-4b21-ac96-26bf027850bf" (UID: "b5770097-96ec-4b21-ac96-26bf027850bf"). InnerVolumeSpecName "kube-api-access-grxgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:27:07.703183 master-0 kubenswrapper[7385]: I0319 09:27:07.703135 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"d04277ae-5881-4ce1-9157-d58f93a5f116","Type":"ContainerStarted","Data":"b6f9f00de289e7567b86de252b9a6b1c229d174535eede65fb885a8a83fa2393"} Mar 19 09:27:07.706176 master-0 kubenswrapper[7385]: I0319 09:27:07.706148 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-nbj5j_b5770097-96ec-4b21-ac96-26bf027850bf/kube-multus-additional-cni-plugins/0.log" Mar 19 09:27:07.706229 master-0 kubenswrapper[7385]: I0319 09:27:07.706194 7385 generic.go:334] "Generic (PLEG): container finished" podID="b5770097-96ec-4b21-ac96-26bf027850bf" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" exitCode=137 Mar 19 09:27:07.706229 master-0 kubenswrapper[7385]: I0319 09:27:07.706222 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" event={"ID":"b5770097-96ec-4b21-ac96-26bf027850bf","Type":"ContainerDied","Data":"04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4"} Mar 19 09:27:07.706310 master-0 kubenswrapper[7385]: I0319 09:27:07.706245 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" event={"ID":"b5770097-96ec-4b21-ac96-26bf027850bf","Type":"ContainerDied","Data":"0055cb4c6c131d6dc20d04d7968a8349f3a6e11d95d2bd7dcfb09f401f66d5f9"} Mar 19 09:27:07.706310 master-0 kubenswrapper[7385]: I0319 09:27:07.706248 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-nbj5j" Mar 19 09:27:07.706310 master-0 kubenswrapper[7385]: I0319 09:27:07.706263 7385 scope.go:117] "RemoveContainer" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" Mar 19 09:27:07.719094 master-0 kubenswrapper[7385]: I0319 09:27:07.718909 7385 scope.go:117] "RemoveContainer" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" Mar 19 09:27:07.719449 master-0 kubenswrapper[7385]: E0319 09:27:07.719319 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4\": container with ID starting with 04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4 not found: ID does not exist" containerID="04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4" Mar 19 09:27:07.719449 master-0 kubenswrapper[7385]: I0319 09:27:07.719354 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4"} err="failed to get container status \"04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4\": rpc error: code = NotFound desc = could not find container \"04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4\": container with ID starting with 04c5f61cf575ffafb5379c6e1bb062eb5058a239d0c3ef63db90825bdabc0cf4 not found: ID does not exist" Mar 19 09:27:07.789929 master-0 kubenswrapper[7385]: I0319 09:27:07.789896 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grxgp\" (UniqueName: \"kubernetes.io/projected/b5770097-96ec-4b21-ac96-26bf027850bf-kube-api-access-grxgp\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:07.936626 master-0 kubenswrapper[7385]: I0319 09:27:07.936510 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-nbj5j"] Mar 19 09:27:07.952736 master-0 kubenswrapper[7385]: I0319 09:27:07.951431 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-nbj5j"] Mar 19 09:27:08.538059 master-0 kubenswrapper[7385]: I0319 09:27:08.538003 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5770097-96ec-4b21-ac96-26bf027850bf" path="/var/lib/kubelet/pods/b5770097-96ec-4b21-ac96-26bf027850bf/volumes" Mar 19 09:27:08.720207 master-0 kubenswrapper[7385]: I0319 09:27:08.720075 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"d04277ae-5881-4ce1-9157-d58f93a5f116","Type":"ContainerStarted","Data":"ef45d184d3bd520a4e4cf7302b2fbd38a0a7146b58fcce765edbe1eaa24e7615"} Mar 19 09:27:09.923347 master-0 kubenswrapper[7385]: I0319 09:27:09.923270 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=3.923250019 podStartE2EDuration="3.923250019s" podCreationTimestamp="2026-03-19 09:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:27:09.11821255 +0000 UTC m=+524.792642251" watchObservedRunningTime="2026-03-19 09:27:09.923250019 +0000 UTC m=+525.597679740" Mar 19 09:27:09.925440 master-0 kubenswrapper[7385]: I0319 09:27:09.925401 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 19 09:27:09.925755 master-0 kubenswrapper[7385]: E0319 09:27:09.925729 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5770097-96ec-4b21-ac96-26bf027850bf" containerName="kube-multus-additional-cni-plugins" Mar 19 09:27:09.925755 master-0 kubenswrapper[7385]: I0319 09:27:09.925756 7385 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b5770097-96ec-4b21-ac96-26bf027850bf" containerName="kube-multus-additional-cni-plugins" Mar 19 09:27:09.925938 master-0 kubenswrapper[7385]: I0319 09:27:09.925917 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5770097-96ec-4b21-ac96-26bf027850bf" containerName="kube-multus-additional-cni-plugins" Mar 19 09:27:09.926460 master-0 kubenswrapper[7385]: I0319 09:27:09.926433 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.025300 master-0 kubenswrapper[7385]: I0319 09:27:10.025248 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c31ae14-dacc-459d-ba1b-010648e9976b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.025515 master-0 kubenswrapper[7385]: I0319 09:27:10.025430 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.025592 master-0 kubenswrapper[7385]: I0319 09:27:10.025530 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-var-lock\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.039480 master-0 kubenswrapper[7385]: I0319 09:27:10.038420 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 19 09:27:10.126412 master-0 
kubenswrapper[7385]: I0319 09:27:10.126360 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.126412 master-0 kubenswrapper[7385]: I0319 09:27:10.126411 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-var-lock\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.126648 master-0 kubenswrapper[7385]: I0319 09:27:10.126438 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c31ae14-dacc-459d-ba1b-010648e9976b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.126648 master-0 kubenswrapper[7385]: I0319 09:27:10.126496 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.126713 master-0 kubenswrapper[7385]: I0319 09:27:10.126637 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-var-lock\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.362820 master-0 kubenswrapper[7385]: I0319 09:27:10.362756 7385 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c31ae14-dacc-459d-ba1b-010648e9976b-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.543376 master-0 kubenswrapper[7385]: I0319 09:27:10.543342 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 19 09:27:10.983245 master-0 kubenswrapper[7385]: I0319 09:27:10.983111 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 19 09:27:10.991805 master-0 kubenswrapper[7385]: W0319 09:27:10.991714 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6c31ae14_dacc_459d_ba1b_010648e9976b.slice/crio-34e723440b683bb3bdb6fa73ea58ddec6aab0abaa158203af6c01277fdf358d9 WatchSource:0}: Error finding container 34e723440b683bb3bdb6fa73ea58ddec6aab0abaa158203af6c01277fdf358d9: Status 404 returned error can't find the container with id 34e723440b683bb3bdb6fa73ea58ddec6aab0abaa158203af6c01277fdf358d9 Mar 19 09:27:11.740153 master-0 kubenswrapper[7385]: I0319 09:27:11.740018 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"6c31ae14-dacc-459d-ba1b-010648e9976b","Type":"ContainerStarted","Data":"04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396"} Mar 19 09:27:11.740153 master-0 kubenswrapper[7385]: I0319 09:27:11.740066 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"6c31ae14-dacc-459d-ba1b-010648e9976b","Type":"ContainerStarted","Data":"34e723440b683bb3bdb6fa73ea58ddec6aab0abaa158203af6c01277fdf358d9"} Mar 19 09:27:11.808112 master-0 kubenswrapper[7385]: I0319 09:27:11.808034 7385 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.808016072 podStartE2EDuration="2.808016072s" podCreationTimestamp="2026-03-19 09:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:27:11.805425953 +0000 UTC m=+527.479855664" watchObservedRunningTime="2026-03-19 09:27:11.808016072 +0000 UTC m=+527.482445783" Mar 19 09:27:13.668176 master-0 kubenswrapper[7385]: I0319 09:27:13.668119 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 19 09:27:13.668929 master-0 kubenswrapper[7385]: I0319 09:27:13.668894 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.670737 master-0 kubenswrapper[7385]: I0319 09:27:13.670690 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-9cppt" Mar 19 09:27:13.672153 master-0 kubenswrapper[7385]: I0319 09:27:13.672111 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 19 09:27:13.736752 master-0 kubenswrapper[7385]: I0319 09:27:13.736667 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 19 09:27:13.784222 master-0 kubenswrapper[7385]: I0319 09:27:13.784148 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5780efa-c56a-4953-807f-6a51efc91b09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.784437 master-0 kubenswrapper[7385]: I0319 09:27:13.784241 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-var-lock\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.784489 master-0 kubenswrapper[7385]: I0319 09:27:13.784418 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.885598 master-0 kubenswrapper[7385]: I0319 09:27:13.885523 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.885860 master-0 kubenswrapper[7385]: I0319 09:27:13.885645 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5780efa-c56a-4953-807f-6a51efc91b09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.885860 master-0 kubenswrapper[7385]: I0319 09:27:13.885647 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.885860 master-0 kubenswrapper[7385]: I0319 09:27:13.885761 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-var-lock\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:13.885984 master-0 kubenswrapper[7385]: I0319 09:27:13.885867 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-var-lock\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:14.039334 master-0 kubenswrapper[7385]: I0319 09:27:14.039223 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5780efa-c56a-4953-807f-6a51efc91b09-kube-api-access\") pod \"installer-2-master-0\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") " pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:14.286635 master-0 kubenswrapper[7385]: I0319 09:27:14.286569 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 19 09:27:14.684434 master-0 kubenswrapper[7385]: I0319 09:27:14.684384 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 19 09:27:14.689645 master-0 kubenswrapper[7385]: W0319 09:27:14.689520 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode5780efa_c56a_4953_807f_6a51efc91b09.slice/crio-41308b73b0cd59e2ebd3a9e2ccbd13c59e32ef712883338ba6a663fd6955d3dc WatchSource:0}: Error finding container 41308b73b0cd59e2ebd3a9e2ccbd13c59e32ef712883338ba6a663fd6955d3dc: Status 404 returned error can't find the container with id 41308b73b0cd59e2ebd3a9e2ccbd13c59e32ef712883338ba6a663fd6955d3dc Mar 19 09:27:14.761206 master-0 kubenswrapper[7385]: I0319 09:27:14.761098 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e5780efa-c56a-4953-807f-6a51efc91b09","Type":"ContainerStarted","Data":"41308b73b0cd59e2ebd3a9e2ccbd13c59e32ef712883338ba6a663fd6955d3dc"} Mar 19 09:27:15.768328 master-0 kubenswrapper[7385]: I0319 09:27:15.768125 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-q4rkm_3816f149-ddce-41c8-a540-fe866ee71c5e/multus-admission-controller/0.log" Mar 19 09:27:15.768328 master-0 kubenswrapper[7385]: I0319 09:27:15.768186 7385 generic.go:334] "Generic (PLEG): container finished" podID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerID="878e0d63701a1caf794ebb2ed5a4a759d206a20246066ad1acd5bdfd53aa835e" exitCode=137 Mar 19 09:27:15.768328 master-0 kubenswrapper[7385]: I0319 09:27:15.768251 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" event={"ID":"3816f149-ddce-41c8-a540-fe866ee71c5e","Type":"ContainerDied","Data":"878e0d63701a1caf794ebb2ed5a4a759d206a20246066ad1acd5bdfd53aa835e"} Mar 19 09:27:15.769768 
master-0 kubenswrapper[7385]: I0319 09:27:15.769729 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e5780efa-c56a-4953-807f-6a51efc91b09","Type":"ContainerStarted","Data":"bf5e3834612c0d4b8b32cfe23c6154f92dbaf9ab5151f44cf79b7b61c3d85739"} Mar 19 09:27:15.813175 master-0 kubenswrapper[7385]: I0319 09:27:15.813002 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.812956276 podStartE2EDuration="2.812956276s" podCreationTimestamp="2026-03-19 09:27:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:27:15.807101999 +0000 UTC m=+531.481531740" watchObservedRunningTime="2026-03-19 09:27:15.812956276 +0000 UTC m=+531.487386047" Mar 19 09:27:16.309738 master-0 kubenswrapper[7385]: I0319 09:27:16.309692 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-q4rkm_3816f149-ddce-41c8-a540-fe866ee71c5e/multus-admission-controller/0.log" Mar 19 09:27:16.309919 master-0 kubenswrapper[7385]: I0319 09:27:16.309762 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" Mar 19 09:27:16.424490 master-0 kubenswrapper[7385]: I0319 09:27:16.424337 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7plsz\" (UniqueName: \"kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz\") pod \"3816f149-ddce-41c8-a540-fe866ee71c5e\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " Mar 19 09:27:16.424995 master-0 kubenswrapper[7385]: I0319 09:27:16.424955 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") pod \"3816f149-ddce-41c8-a540-fe866ee71c5e\" (UID: \"3816f149-ddce-41c8-a540-fe866ee71c5e\") " Mar 19 09:27:16.427434 master-0 kubenswrapper[7385]: I0319 09:27:16.427373 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "3816f149-ddce-41c8-a540-fe866ee71c5e" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:27:16.427835 master-0 kubenswrapper[7385]: I0319 09:27:16.427766 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz" (OuterVolumeSpecName: "kube-api-access-7plsz") pod "3816f149-ddce-41c8-a540-fe866ee71c5e" (UID: "3816f149-ddce-41c8-a540-fe866ee71c5e"). InnerVolumeSpecName "kube-api-access-7plsz". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:27:16.526853 master-0 kubenswrapper[7385]: I0319 09:27:16.526808 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7plsz\" (UniqueName: \"kubernetes.io/projected/3816f149-ddce-41c8-a540-fe866ee71c5e-kube-api-access-7plsz\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:16.526853 master-0 kubenswrapper[7385]: I0319 09:27:16.526848 7385 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/3816f149-ddce-41c8-a540-fe866ee71c5e-webhook-certs\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:16.783338 master-0 kubenswrapper[7385]: I0319 09:27:16.783218 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-q4rkm_3816f149-ddce-41c8-a540-fe866ee71c5e/multus-admission-controller/0.log"
Mar 19 09:27:16.783943 master-0 kubenswrapper[7385]: I0319 09:27:16.783358 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"
Mar 19 09:27:16.783943 master-0 kubenswrapper[7385]: I0319 09:27:16.783411 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm" event={"ID":"3816f149-ddce-41c8-a540-fe866ee71c5e","Type":"ContainerDied","Data":"61558dca744350def3b0a516cd7192d3505c64b58643571bb0e2e07f06bffb85"}
Mar 19 09:27:16.783943 master-0 kubenswrapper[7385]: I0319 09:27:16.783452 7385 scope.go:117] "RemoveContainer" containerID="a0def10435beba37cc4f2c51d6d95e5b8b0c440dcd92fc57f96ff4a342fc9bce"
Mar 19 09:27:16.822146 master-0 kubenswrapper[7385]: I0319 09:27:16.822093 7385 scope.go:117] "RemoveContainer" containerID="878e0d63701a1caf794ebb2ed5a4a759d206a20246066ad1acd5bdfd53aa835e"
Mar 19 09:27:17.058470 master-0 kubenswrapper[7385]: I0319 09:27:17.058322 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"]
Mar 19 09:27:17.075194 master-0 kubenswrapper[7385]: I0319 09:27:17.075130 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-q4rkm"]
Mar 19 09:27:18.541884 master-0 kubenswrapper[7385]: I0319 09:27:18.541832 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" path="/var/lib/kubelet/pods/3816f149-ddce-41c8-a540-fe866ee71c5e/volumes"
Mar 19 09:27:23.563312 master-0 kubenswrapper[7385]: I0319 09:27:23.563255 7385 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 19 09:27:23.563882 master-0 kubenswrapper[7385]: I0319 09:27:23.563575 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://45039086b1bdf7c8b135828088ebf13ff393c5333b5272f1cf3328f195ddea5b" gracePeriod=30
Mar 19 09:27:23.563882 master-0 kubenswrapper[7385]: I0319 09:27:23.563630 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://56d983e0cc6cd1122ae8e1e8f833654b8157419cc9f034610ad57896ed648267" gracePeriod=30
Mar 19 09:27:23.564970 master-0 kubenswrapper[7385]: I0319 09:27:23.564939 7385 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 19 09:27:23.565290 master-0 kubenswrapper[7385]: E0319 09:27:23.565276 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.565353 master-0 kubenswrapper[7385]: I0319 09:27:23.565343 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.565414 master-0 kubenswrapper[7385]: E0319 09:27:23.565405 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="multus-admission-controller"
Mar 19 09:27:23.565468 master-0 kubenswrapper[7385]: I0319 09:27:23.565460 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="multus-admission-controller"
Mar 19 09:27:23.565534 master-0 kubenswrapper[7385]: E0319 09:27:23.565525 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.565660 master-0 kubenswrapper[7385]: I0319 09:27:23.565649 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.565734 master-0 kubenswrapper[7385]: E0319 09:27:23.565724 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="kube-rbac-proxy"
Mar 19 09:27:23.565789 master-0 kubenswrapper[7385]: I0319 09:27:23.565780 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="kube-rbac-proxy"
Mar 19 09:27:23.565849 master-0 kubenswrapper[7385]: E0319 09:27:23.565840 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 19 09:27:23.565901 master-0 kubenswrapper[7385]: I0319 09:27:23.565892 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 19 09:27:23.566073 master-0 kubenswrapper[7385]: I0319 09:27:23.566061 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.566138 master-0 kubenswrapper[7385]: I0319 09:27:23.566129 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 19 09:27:23.566198 master-0 kubenswrapper[7385]: I0319 09:27:23.566189 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="multus-admission-controller"
Mar 19 09:27:23.566253 master-0 kubenswrapper[7385]: I0319 09:27:23.566244 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="3816f149-ddce-41c8-a540-fe866ee71c5e" containerName="kube-rbac-proxy"
Mar 19 09:27:23.566308 master-0 kubenswrapper[7385]: I0319 09:27:23.566299 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.566365 master-0 kubenswrapper[7385]: I0319 09:27:23.566356 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.566534 master-0 kubenswrapper[7385]: E0319 09:27:23.566522 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.566627 master-0 kubenswrapper[7385]: I0319 09:27:23.566616 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 19 09:27:23.571728 master-0 kubenswrapper[7385]: I0319 09:27:23.571693 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:23.667254 master-0 kubenswrapper[7385]: I0319 09:27:23.667177 7385 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 19 09:27:23.693297 master-0 kubenswrapper[7385]: I0319 09:27:23.693236 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 19 09:27:23.723160 master-0 kubenswrapper[7385]: I0319 09:27:23.723122 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:23.723314 master-0 kubenswrapper[7385]: I0319 09:27:23.723237 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:23.739289 master-0 kubenswrapper[7385]: I0319 09:27:23.739266 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:27:23.760020 master-0 kubenswrapper[7385]: I0319 09:27:23.759708 7385 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="c2befda3-c72f-4895-9f12-fd1aba320ac5"
Mar 19 09:27:23.824325 master-0 kubenswrapper[7385]: I0319 09:27:23.824214 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 19 09:27:23.824325 master-0 kubenswrapper[7385]: I0319 09:27:23.824263 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 19 09:27:23.824325 master-0 kubenswrapper[7385]: I0319 09:27:23.824297 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:23.824325 master-0 kubenswrapper[7385]: I0319 09:27:23.824329 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 19 09:27:23.824601 master-0 kubenswrapper[7385]: I0319 09:27:23.824342 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:23.824601 master-0 kubenswrapper[7385]: I0319 09:27:23.824354 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 19 09:27:23.824601 master-0 kubenswrapper[7385]: I0319 09:27:23.824362 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:23.824601 master-0 kubenswrapper[7385]: I0319 09:27:23.824400 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 19 09:27:23.824601 master-0 kubenswrapper[7385]: I0319 09:27:23.824448 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:23.824601 master-0 kubenswrapper[7385]: I0319 09:27:23.824534 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:23.824601 master-0 kubenswrapper[7385]: I0319 09:27:23.824591 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:23.824812 master-0 kubenswrapper[7385]: I0319 09:27:23.824534 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:23.824812 master-0 kubenswrapper[7385]: I0319 09:27:23.824721 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:23.824812 master-0 kubenswrapper[7385]: I0319 09:27:23.824800 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:23.824916 master-0 kubenswrapper[7385]: I0319 09:27:23.824826 7385 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:23.824916 master-0 kubenswrapper[7385]: I0319 09:27:23.824839 7385 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:23.824916 master-0 kubenswrapper[7385]: I0319 09:27:23.824848 7385 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:23.824916 master-0 kubenswrapper[7385]: I0319 09:27:23.824801 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:23.824916 master-0 kubenswrapper[7385]: I0319 09:27:23.824858 7385 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:23.825770 master-0 kubenswrapper[7385]: I0319 09:27:23.825738 7385 generic.go:334] "Generic (PLEG): container finished" podID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerID="d34b15333e7215221eb3166bafa905cc720923c5b54182dc9d2d804528d9b642" exitCode=0
Mar 19 09:27:23.825831 master-0 kubenswrapper[7385]: I0319 09:27:23.825812 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"925be58b-a4e2-448b-afb4-4b4d689ae64c","Type":"ContainerDied","Data":"d34b15333e7215221eb3166bafa905cc720923c5b54182dc9d2d804528d9b642"}
Mar 19 09:27:23.828658 master-0 kubenswrapper[7385]: I0319 09:27:23.828632 7385 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="56d983e0cc6cd1122ae8e1e8f833654b8157419cc9f034610ad57896ed648267" exitCode=0
Mar 19 09:27:23.828658 master-0 kubenswrapper[7385]: I0319 09:27:23.828654 7385 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="45039086b1bdf7c8b135828088ebf13ff393c5333b5272f1cf3328f195ddea5b" exitCode=0
Mar 19 09:27:23.828782 master-0 kubenswrapper[7385]: I0319 09:27:23.828742 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 19 09:27:23.828860 master-0 kubenswrapper[7385]: I0319 09:27:23.828821 7385 scope.go:117] "RemoveContainer" containerID="58961d9e0be486e46715cb6bf5872c6474bbf247fb8ed12ba8931d59b7f9e590"
Mar 19 09:27:23.830737 master-0 kubenswrapper[7385]: I0319 09:27:23.830708 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa35edd2b62a9ce73580ab7abdcbc7340687984005692e991be9188ca04f7aa2"
Mar 19 09:27:23.990715 master-0 kubenswrapper[7385]: I0319 09:27:23.990654 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:27:24.019137 master-0 kubenswrapper[7385]: W0319 09:27:24.019087 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67658b93f6f5927402b87ec35623e46e.slice/crio-8911f4949ad2b1026cf67388b4c856ca207ee327d335f0a0ffbddeb06f138626 WatchSource:0}: Error finding container 8911f4949ad2b1026cf67388b4c856ca207ee327d335f0a0ffbddeb06f138626: Status 404 returned error can't find the container with id 8911f4949ad2b1026cf67388b4c856ca207ee327d335f0a0ffbddeb06f138626
Mar 19 09:27:24.538037 master-0 kubenswrapper[7385]: I0319 09:27:24.537983 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes"
Mar 19 09:27:24.538465 master-0 kubenswrapper[7385]: I0319 09:27:24.538417 7385 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 19 09:27:24.570998 master-0 kubenswrapper[7385]: I0319 09:27:24.568256 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 19 09:27:24.570998 master-0 kubenswrapper[7385]: I0319 09:27:24.568304 7385 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="c2befda3-c72f-4895-9f12-fd1aba320ac5"
Mar 19 09:27:24.571938 master-0 kubenswrapper[7385]: I0319 09:27:24.571880 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 19 09:27:24.572004 master-0 kubenswrapper[7385]: I0319 09:27:24.571935 7385 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="c2befda3-c72f-4895-9f12-fd1aba320ac5"
Mar 19 09:27:24.839610 master-0 kubenswrapper[7385]: I0319 09:27:24.839534 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"9e6e66f3c8a2bf098bd2b9e3696054dca0c7f2ceb70c56ac6de3f703458cdd0e"}
Mar 19 09:27:24.839610 master-0 kubenswrapper[7385]: I0319 09:27:24.839598 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"65e7eeeadf0553dafef845c5c629e20ef18e102a3f0f7e94e025271877410b78"}
Mar 19 09:27:24.839610 master-0 kubenswrapper[7385]: I0319 09:27:24.839612 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"57d853d1ec8afcb012b4bb8c0bf03fdeac8c6cbef5eb24aa2fea3d5801611fb9"}
Mar 19 09:27:24.839610 master-0 kubenswrapper[7385]: I0319 09:27:24.839622 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"8911f4949ad2b1026cf67388b4c856ca207ee327d335f0a0ffbddeb06f138626"}
Mar 19 09:27:25.008095 master-0 kubenswrapper[7385]: I0319 09:27:25.007861 7385 scope.go:117] "RemoveContainer" containerID="56d983e0cc6cd1122ae8e1e8f833654b8157419cc9f034610ad57896ed648267"
Mar 19 09:27:25.025112 master-0 kubenswrapper[7385]: I0319 09:27:25.025053 7385 scope.go:117] "RemoveContainer" containerID="45039086b1bdf7c8b135828088ebf13ff393c5333b5272f1cf3328f195ddea5b"
Mar 19 09:27:25.119318 master-0 kubenswrapper[7385]: I0319 09:27:25.119277 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:27:25.243623 master-0 kubenswrapper[7385]: I0319 09:27:25.242957 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/925be58b-a4e2-448b-afb4-4b4d689ae64c-kube-api-access\") pod \"925be58b-a4e2-448b-afb4-4b4d689ae64c\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") "
Mar 19 09:27:25.243623 master-0 kubenswrapper[7385]: I0319 09:27:25.243144 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-var-lock\") pod \"925be58b-a4e2-448b-afb4-4b4d689ae64c\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") "
Mar 19 09:27:25.243623 master-0 kubenswrapper[7385]: I0319 09:27:25.243206 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-kubelet-dir\") pod \"925be58b-a4e2-448b-afb4-4b4d689ae64c\" (UID: \"925be58b-a4e2-448b-afb4-4b4d689ae64c\") "
Mar 19 09:27:25.243623 master-0 kubenswrapper[7385]: I0319 09:27:25.243311 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-var-lock" (OuterVolumeSpecName: "var-lock") pod "925be58b-a4e2-448b-afb4-4b4d689ae64c" (UID: "925be58b-a4e2-448b-afb4-4b4d689ae64c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:25.243623 master-0 kubenswrapper[7385]: I0319 09:27:25.243355 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "925be58b-a4e2-448b-afb4-4b4d689ae64c" (UID: "925be58b-a4e2-448b-afb4-4b4d689ae64c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:25.243623 master-0 kubenswrapper[7385]: I0319 09:27:25.243573 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:25.243623 master-0 kubenswrapper[7385]: I0319 09:27:25.243591 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/925be58b-a4e2-448b-afb4-4b4d689ae64c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:25.252175 master-0 kubenswrapper[7385]: I0319 09:27:25.251998 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925be58b-a4e2-448b-afb4-4b4d689ae64c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "925be58b-a4e2-448b-afb4-4b4d689ae64c" (UID: "925be58b-a4e2-448b-afb4-4b4d689ae64c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:27:25.271809 master-0 kubenswrapper[7385]: I0319 09:27:25.269268 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 09:27:25.271809 master-0 kubenswrapper[7385]: I0319 09:27:25.269623 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="6c31ae14-dacc-459d-ba1b-010648e9976b" containerName="installer" containerID="cri-o://04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396" gracePeriod=30
Mar 19 09:27:25.344566 master-0 kubenswrapper[7385]: I0319 09:27:25.344496 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/925be58b-a4e2-448b-afb4-4b4d689ae64c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:25.646142 master-0 kubenswrapper[7385]: I0319 09:27:25.646068 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_6c31ae14-dacc-459d-ba1b-010648e9976b/installer/0.log"
Mar 19 09:27:25.646142 master-0 kubenswrapper[7385]: I0319 09:27:25.646140 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 09:27:25.750868 master-0 kubenswrapper[7385]: I0319 09:27:25.750763 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-var-lock\") pod \"6c31ae14-dacc-459d-ba1b-010648e9976b\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") "
Mar 19 09:27:25.750868 master-0 kubenswrapper[7385]: I0319 09:27:25.750849 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-kubelet-dir\") pod \"6c31ae14-dacc-459d-ba1b-010648e9976b\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") "
Mar 19 09:27:25.750868 master-0 kubenswrapper[7385]: I0319 09:27:25.750892 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c31ae14-dacc-459d-ba1b-010648e9976b-kube-api-access\") pod \"6c31ae14-dacc-459d-ba1b-010648e9976b\" (UID: \"6c31ae14-dacc-459d-ba1b-010648e9976b\") "
Mar 19 09:27:25.751389 master-0 kubenswrapper[7385]: I0319 09:27:25.751012 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6c31ae14-dacc-459d-ba1b-010648e9976b" (UID: "6c31ae14-dacc-459d-ba1b-010648e9976b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:25.751389 master-0 kubenswrapper[7385]: I0319 09:27:25.751037 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-var-lock" (OuterVolumeSpecName: "var-lock") pod "6c31ae14-dacc-459d-ba1b-010648e9976b" (UID: "6c31ae14-dacc-459d-ba1b-010648e9976b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:27:25.751581 master-0 kubenswrapper[7385]: I0319 09:27:25.751402 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:25.751581 master-0 kubenswrapper[7385]: I0319 09:27:25.751425 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c31ae14-dacc-459d-ba1b-010648e9976b-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:25.753733 master-0 kubenswrapper[7385]: I0319 09:27:25.753675 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c31ae14-dacc-459d-ba1b-010648e9976b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6c31ae14-dacc-459d-ba1b-010648e9976b" (UID: "6c31ae14-dacc-459d-ba1b-010648e9976b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:27:25.846643 master-0 kubenswrapper[7385]: I0319 09:27:25.846594 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_6c31ae14-dacc-459d-ba1b-010648e9976b/installer/0.log"
Mar 19 09:27:25.846826 master-0 kubenswrapper[7385]: I0319 09:27:25.846656 7385 generic.go:334] "Generic (PLEG): container finished" podID="6c31ae14-dacc-459d-ba1b-010648e9976b" containerID="04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396" exitCode=1
Mar 19 09:27:25.846826 master-0 kubenswrapper[7385]: I0319 09:27:25.846695 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"6c31ae14-dacc-459d-ba1b-010648e9976b","Type":"ContainerDied","Data":"04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396"}
Mar 19 09:27:25.846826 master-0 kubenswrapper[7385]: I0319 09:27:25.846748 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"6c31ae14-dacc-459d-ba1b-010648e9976b","Type":"ContainerDied","Data":"34e723440b683bb3bdb6fa73ea58ddec6aab0abaa158203af6c01277fdf358d9"}
Mar 19 09:27:25.846826 master-0 kubenswrapper[7385]: I0319 09:27:25.846772 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 19 09:27:25.847012 master-0 kubenswrapper[7385]: I0319 09:27:25.846779 7385 scope.go:117] "RemoveContainer" containerID="04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396"
Mar 19 09:27:25.850208 master-0 kubenswrapper[7385]: I0319 09:27:25.850163 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"759c350f58d15a9fddc9b4b3e95d92a6db8fdfb3e82a8bd183c2d36ff84c76ed"}
Mar 19 09:27:25.851844 master-0 kubenswrapper[7385]: I0319 09:27:25.851613 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"925be58b-a4e2-448b-afb4-4b4d689ae64c","Type":"ContainerDied","Data":"2b815bb5a4f3237642901cf478d08543a7c45d3f20aa5aa587a69d0647d632b8"}
Mar 19 09:27:25.851844 master-0 kubenswrapper[7385]: I0319 09:27:25.851645 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b815bb5a4f3237642901cf478d08543a7c45d3f20aa5aa587a69d0647d632b8"
Mar 19 09:27:25.851844 master-0 kubenswrapper[7385]: I0319 09:27:25.851684 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:27:25.852694 master-0 kubenswrapper[7385]: I0319 09:27:25.852439 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c31ae14-dacc-459d-ba1b-010648e9976b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:27:25.862991 master-0 kubenswrapper[7385]: I0319 09:27:25.862956 7385 scope.go:117] "RemoveContainer" containerID="04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396"
Mar 19 09:27:25.863449 master-0 kubenswrapper[7385]: E0319 09:27:25.863397 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396\": container with ID starting with 04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396 not found: ID does not exist" containerID="04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396"
Mar 19 09:27:25.863524 master-0 kubenswrapper[7385]: I0319 09:27:25.863467 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396"} err="failed to get container status \"04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396\": rpc error: code = NotFound desc = could not find container \"04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396\": container with ID starting with 04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396 not found: ID does not exist"
Mar 19 09:27:25.892231 master-0 kubenswrapper[7385]: I0319 09:27:25.892145 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.892120499 podStartE2EDuration="2.892120499s" podCreationTimestamp="2026-03-19 09:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:27:25.876030788 +0000 UTC m=+541.550460489" watchObservedRunningTime="2026-03-19 09:27:25.892120499 +0000 UTC m=+541.566550210"
Mar 19 09:27:25.895815 master-0 kubenswrapper[7385]: I0319 09:27:25.895604 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 09:27:25.900117 master-0 kubenswrapper[7385]: I0319 09:27:25.900067 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 19 09:27:26.537393 master-0 kubenswrapper[7385]: I0319 09:27:26.537346 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c31ae14-dacc-459d-ba1b-010648e9976b" path="/var/lib/kubelet/pods/6c31ae14-dacc-459d-ba1b-010648e9976b/volumes"
Mar 19 09:27:29.300599 master-0 kubenswrapper[7385]: I0319 09:27:29.300511 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 19 09:27:29.301462 master-0 kubenswrapper[7385]: E0319 09:27:29.301099 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c31ae14-dacc-459d-ba1b-010648e9976b" containerName="installer"
Mar 19 09:27:29.301462 master-0 kubenswrapper[7385]: I0319 09:27:29.301115 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c31ae14-dacc-459d-ba1b-010648e9976b" containerName="installer"
Mar 19 09:27:29.301462 master-0 kubenswrapper[7385]: E0319 09:27:29.301137 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerName="installer"
Mar 19 09:27:29.301462 master-0 kubenswrapper[7385]: I0319 09:27:29.301143 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerName="installer"
Mar 19 09:27:29.301462 master-0 kubenswrapper[7385]: I0319 09:27:29.301255 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c31ae14-dacc-459d-ba1b-010648e9976b" containerName="installer"
Mar 19 09:27:29.301462 master-0 kubenswrapper[7385]: I0319 09:27:29.301269 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerName="installer"
Mar 19 09:27:29.301806 master-0 kubenswrapper[7385]: I0319 09:27:29.301673 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.397212 master-0 kubenswrapper[7385]: I0319 09:27:29.397157 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.397418 master-0 kubenswrapper[7385]: I0319 09:27:29.397237 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-var-lock\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.397418 master-0 kubenswrapper[7385]: I0319 09:27:29.397309 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.482627 master-0 kubenswrapper[7385]: I0319 09:27:29.482571 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 19 09:27:29.498443 master-0 kubenswrapper[7385]: I0319 09:27:29.498391 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-var-lock\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.498651 master-0 kubenswrapper[7385]: I0319 09:27:29.498479 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.498651 master-0 kubenswrapper[7385]: I0319 09:27:29.498515 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.498651 master-0 kubenswrapper[7385]: I0319 09:27:29.498590 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.498651 master-0 kubenswrapper[7385]: I0319 09:27:29.498627 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-var-lock\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:27:29.513440 master-0 kubenswrapper[7385]: I0319 09:27:29.513394 7385 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 09:27:29.623256 master-0 kubenswrapper[7385]: I0319 09:27:29.623194 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 09:27:30.008055 master-0 kubenswrapper[7385]: I0319 09:27:30.007951 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 19 09:27:30.917393 master-0 kubenswrapper[7385]: I0319 09:27:30.917325 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"fe4e3a0b-973b-4534-b91c-1e870e4e5c32","Type":"ContainerStarted","Data":"4772110931eb3a91b47fd2a5b7d728bb53faceca1654dd37bae708926fff76ac"} Mar 19 09:27:30.917393 master-0 kubenswrapper[7385]: I0319 09:27:30.917371 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"fe4e3a0b-973b-4534-b91c-1e870e4e5c32","Type":"ContainerStarted","Data":"a3e68a93a5e0eb978126226b3b3f9b90c706e1a1f588f63ea47aa67b19c47bdf"} Mar 19 09:27:30.939089 master-0 kubenswrapper[7385]: I0319 09:27:30.939006 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=1.938986262 podStartE2EDuration="1.938986262s" podCreationTimestamp="2026-03-19 09:27:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:27:30.937061741 +0000 UTC m=+546.611491482" watchObservedRunningTime="2026-03-19 09:27:30.938986262 +0000 UTC m=+546.613415973" Mar 19 09:27:31.086466 master-0 
kubenswrapper[7385]: E0319 09:27:31.086402 7385 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod6c31ae14_dacc_459d_ba1b_010648e9976b.slice/crio-conmon-04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod3de6f41a_64ab_440c_b8e4_0e947045be07.slice/crio-f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod6c31ae14_dacc_459d_ba1b_010648e9976b.slice/crio-04cc1af90dec747eac3fb94d1a57761f0bda3be74a44f90d958966dab8cd8396.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod925be58b_a4e2_448b_afb4_4b4d689ae64c.slice/crio-2b815bb5a4f3237642901cf478d08543a7c45d3f20aa5aa587a69d0647d632b8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod925be58b_a4e2_448b_afb4_4b4d689ae64c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod3de6f41a_64ab_440c_b8e4_0e947045be07.slice/crio-conmon-f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pod6c31ae14_dacc_459d_ba1b_010648e9976b.slice\": RecentStats: unable to find data in memory cache]" Mar 19 09:27:31.268619 master-0 kubenswrapper[7385]: I0319 09:27:31.268593 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_3de6f41a-64ab-440c-b8e4-0e947045be07/installer/0.log" Mar 19 09:27:31.268839 master-0 kubenswrapper[7385]: I0319 09:27:31.268825 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:27:31.429692 master-0 kubenswrapper[7385]: I0319 09:27:31.429527 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3de6f41a-64ab-440c-b8e4-0e947045be07-kube-api-access\") pod \"3de6f41a-64ab-440c-b8e4-0e947045be07\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " Mar 19 09:27:31.430525 master-0 kubenswrapper[7385]: I0319 09:27:31.430401 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-kubelet-dir\") pod \"3de6f41a-64ab-440c-b8e4-0e947045be07\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " Mar 19 09:27:31.431064 master-0 kubenswrapper[7385]: I0319 09:27:31.430521 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3de6f41a-64ab-440c-b8e4-0e947045be07" (UID: "3de6f41a-64ab-440c-b8e4-0e947045be07"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:27:31.431937 master-0 kubenswrapper[7385]: I0319 09:27:31.431852 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-var-lock\") pod \"3de6f41a-64ab-440c-b8e4-0e947045be07\" (UID: \"3de6f41a-64ab-440c-b8e4-0e947045be07\") " Mar 19 09:27:31.432264 master-0 kubenswrapper[7385]: I0319 09:27:31.431944 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-var-lock" (OuterVolumeSpecName: "var-lock") pod "3de6f41a-64ab-440c-b8e4-0e947045be07" (UID: "3de6f41a-64ab-440c-b8e4-0e947045be07"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:27:31.432756 master-0 kubenswrapper[7385]: I0319 09:27:31.432718 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:31.433062 master-0 kubenswrapper[7385]: I0319 09:27:31.433035 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3de6f41a-64ab-440c-b8e4-0e947045be07-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:31.434890 master-0 kubenswrapper[7385]: I0319 09:27:31.434796 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de6f41a-64ab-440c-b8e4-0e947045be07-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3de6f41a-64ab-440c-b8e4-0e947045be07" (UID: "3de6f41a-64ab-440c-b8e4-0e947045be07"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:27:31.534471 master-0 kubenswrapper[7385]: I0319 09:27:31.534288 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3de6f41a-64ab-440c-b8e4-0e947045be07-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:31.925927 master-0 kubenswrapper[7385]: I0319 09:27:31.925833 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_3de6f41a-64ab-440c-b8e4-0e947045be07/installer/0.log" Mar 19 09:27:31.925927 master-0 kubenswrapper[7385]: I0319 09:27:31.925899 7385 generic.go:334] "Generic (PLEG): container finished" podID="3de6f41a-64ab-440c-b8e4-0e947045be07" containerID="f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc" exitCode=1 Mar 19 09:27:31.926921 master-0 kubenswrapper[7385]: I0319 09:27:31.926011 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 19 09:27:31.926921 master-0 kubenswrapper[7385]: I0319 09:27:31.926005 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3de6f41a-64ab-440c-b8e4-0e947045be07","Type":"ContainerDied","Data":"f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc"} Mar 19 09:27:31.926921 master-0 kubenswrapper[7385]: I0319 09:27:31.926179 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"3de6f41a-64ab-440c-b8e4-0e947045be07","Type":"ContainerDied","Data":"f44617f2c8372d662c88e0e11139aa9c411605b5f6563263f4e85fcbc2405f87"} Mar 19 09:27:31.926921 master-0 kubenswrapper[7385]: I0319 09:27:31.926217 7385 scope.go:117] "RemoveContainer" containerID="f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc" Mar 19 09:27:31.945816 master-0 kubenswrapper[7385]: I0319 09:27:31.945765 7385 scope.go:117] "RemoveContainer" containerID="f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc" Mar 19 09:27:31.946196 master-0 kubenswrapper[7385]: E0319 09:27:31.946151 7385 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc\": container with ID starting with f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc not found: ID does not exist" containerID="f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc" Mar 19 09:27:31.946287 master-0 kubenswrapper[7385]: I0319 09:27:31.946187 7385 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc"} err="failed to get container status \"f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc\": rpc error: code = NotFound 
desc = could not find container \"f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc\": container with ID starting with f0de9b46060861d123fc3eabd3bd4c7e565147247ae8e05c7dc24789b2ff0fbc not found: ID does not exist" Mar 19 09:27:31.977683 master-0 kubenswrapper[7385]: I0319 09:27:31.972995 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 09:27:31.989667 master-0 kubenswrapper[7385]: I0319 09:27:31.989604 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 19 09:27:32.543572 master-0 kubenswrapper[7385]: I0319 09:27:32.543466 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de6f41a-64ab-440c-b8e4-0e947045be07" path="/var/lib/kubelet/pods/3de6f41a-64ab-440c-b8e4-0e947045be07/volumes" Mar 19 09:27:33.991226 master-0 kubenswrapper[7385]: I0319 09:27:33.991189 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:33.991775 master-0 kubenswrapper[7385]: I0319 09:27:33.991761 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:33.991842 master-0 kubenswrapper[7385]: I0319 09:27:33.991833 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:33.991899 master-0 kubenswrapper[7385]: I0319 09:27:33.991890 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:33.996609 master-0 kubenswrapper[7385]: I0319 09:27:33.996593 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:33.998889 
master-0 kubenswrapper[7385]: I0319 09:27:33.998850 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:34.955951 master-0 kubenswrapper[7385]: I0319 09:27:34.955888 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:34.957325 master-0 kubenswrapper[7385]: I0319 09:27:34.957256 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:27:39.050735 master-0 kubenswrapper[7385]: I0319 09:27:39.050666 7385 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 09:27:39.051283 master-0 kubenswrapper[7385]: I0319 09:27:39.050915 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://64562b405a3862b5592fdd93f8c95623b24024a5e23281d2b69f8ff3942c63c6" gracePeriod=30 Mar 19 09:27:39.052496 master-0 kubenswrapper[7385]: I0319 09:27:39.052451 7385 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 09:27:39.052866 master-0 kubenswrapper[7385]: E0319 09:27:39.052835 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de6f41a-64ab-440c-b8e4-0e947045be07" containerName="installer" Mar 19 09:27:39.052866 master-0 kubenswrapper[7385]: I0319 09:27:39.052857 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de6f41a-64ab-440c-b8e4-0e947045be07" containerName="installer" Mar 19 09:27:39.052945 master-0 kubenswrapper[7385]: E0319 09:27:39.052870 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" 
containerName="kube-scheduler" Mar 19 09:27:39.052945 master-0 kubenswrapper[7385]: I0319 09:27:39.052879 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 09:27:39.053162 master-0 kubenswrapper[7385]: I0319 09:27:39.053096 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 09:27:39.053162 master-0 kubenswrapper[7385]: I0319 09:27:39.053118 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de6f41a-64ab-440c-b8e4-0e947045be07" containerName="installer" Mar 19 09:27:39.053291 master-0 kubenswrapper[7385]: E0319 09:27:39.053268 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 09:27:39.053291 master-0 kubenswrapper[7385]: I0319 09:27:39.053284 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 09:27:39.053451 master-0 kubenswrapper[7385]: I0319 09:27:39.053431 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 19 09:27:39.054687 master-0 kubenswrapper[7385]: I0319 09:27:39.054650 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.070393 master-0 kubenswrapper[7385]: I0319 09:27:39.070349 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.070679 master-0 kubenswrapper[7385]: I0319 09:27:39.070526 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.160826 master-0 kubenswrapper[7385]: I0319 09:27:39.160761 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 09:27:39.171614 master-0 kubenswrapper[7385]: I0319 09:27:39.171518 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.171614 master-0 kubenswrapper[7385]: I0319 09:27:39.171619 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.171845 master-0 
kubenswrapper[7385]: I0319 09:27:39.171653 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.171920 master-0 kubenswrapper[7385]: I0319 09:27:39.171868 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.315052 master-0 kubenswrapper[7385]: I0319 09:27:39.314932 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:27:39.350172 master-0 kubenswrapper[7385]: I0319 09:27:39.350102 7385 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="96e69531-6720-4393-bf13-a5c637a4354c" Mar 19 09:27:39.456261 master-0 kubenswrapper[7385]: I0319 09:27:39.456177 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:39.472409 master-0 kubenswrapper[7385]: W0319 09:27:39.472363 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8413125cf444e5c95f023c5dd9c6151e.slice/crio-a67637c3ce9588f542e20565aba89d6f1d4976553a42d7b1a45d6451a0663219 WatchSource:0}: Error finding container a67637c3ce9588f542e20565aba89d6f1d4976553a42d7b1a45d6451a0663219: Status 404 returned error can't find the container with id a67637c3ce9588f542e20565aba89d6f1d4976553a42d7b1a45d6451a0663219 Mar 19 09:27:39.475170 master-0 kubenswrapper[7385]: I0319 09:27:39.475112 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 19 09:27:39.475226 master-0 kubenswrapper[7385]: I0319 09:27:39.475202 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:27:39.475266 master-0 kubenswrapper[7385]: I0319 09:27:39.475222 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 19 09:27:39.475385 master-0 kubenswrapper[7385]: I0319 09:27:39.475339 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:27:39.475693 master-0 kubenswrapper[7385]: I0319 09:27:39.475667 7385 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:39.475693 master-0 kubenswrapper[7385]: I0319 09:27:39.475689 7385 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:39.984681 master-0 kubenswrapper[7385]: I0319 09:27:39.984556 7385 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="64562b405a3862b5592fdd93f8c95623b24024a5e23281d2b69f8ff3942c63c6" exitCode=0 Mar 19 09:27:39.984681 master-0 kubenswrapper[7385]: I0319 09:27:39.984633 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28589fe861615165ce142a9537680f47cc85afea3410570cd76b4c912716d3e6" Mar 19 09:27:39.984681 master-0 kubenswrapper[7385]: I0319 09:27:39.984639 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 19 09:27:39.985034 master-0 kubenswrapper[7385]: I0319 09:27:39.984652 7385 scope.go:117] "RemoveContainer" containerID="0fa64701f5e06185b54d04000e8eff35b5351d75655dd3a6eb6ffaa3f06a93bd" Mar 19 09:27:39.990082 master-0 kubenswrapper[7385]: I0319 09:27:39.990045 7385 generic.go:334] "Generic (PLEG): container finished" podID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerID="ef45d184d3bd520a4e4cf7302b2fbd38a0a7146b58fcce765edbe1eaa24e7615" exitCode=0 Mar 19 09:27:39.990188 master-0 kubenswrapper[7385]: I0319 09:27:39.990128 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"d04277ae-5881-4ce1-9157-d58f93a5f116","Type":"ContainerDied","Data":"ef45d184d3bd520a4e4cf7302b2fbd38a0a7146b58fcce765edbe1eaa24e7615"} Mar 19 09:27:39.993665 master-0 kubenswrapper[7385]: I0319 09:27:39.993205 7385 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="e6c6a6b2ffdb2a6ceaac069cb1bbfd1fd6ab268976108284249a62d330f8ad4e" exitCode=0 Mar 19 09:27:39.993665 master-0 kubenswrapper[7385]: I0319 09:27:39.993256 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"e6c6a6b2ffdb2a6ceaac069cb1bbfd1fd6ab268976108284249a62d330f8ad4e"} Mar 19 09:27:39.993665 master-0 kubenswrapper[7385]: I0319 09:27:39.993328 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"a67637c3ce9588f542e20565aba89d6f1d4976553a42d7b1a45d6451a0663219"} Mar 19 09:27:40.537445 master-0 kubenswrapper[7385]: I0319 09:27:40.537337 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" 
path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 19 09:27:40.538001 master-0 kubenswrapper[7385]: I0319 09:27:40.537701 7385 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 19 09:27:40.549721 master-0 kubenswrapper[7385]: I0319 09:27:40.549669 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 09:27:40.549721 master-0 kubenswrapper[7385]: I0319 09:27:40.549717 7385 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="96e69531-6720-4393-bf13-a5c637a4354c" Mar 19 09:27:40.551912 master-0 kubenswrapper[7385]: I0319 09:27:40.551849 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 19 09:27:40.551912 master-0 kubenswrapper[7385]: I0319 09:27:40.551900 7385 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="96e69531-6720-4393-bf13-a5c637a4354c" Mar 19 09:27:41.010787 master-0 kubenswrapper[7385]: I0319 09:27:41.010741 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"95ac7f362ef5d31be76e509ce342250794db8fc83ad49a811e1f5659d7238a79"} Mar 19 09:27:41.010787 master-0 kubenswrapper[7385]: I0319 09:27:41.010782 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"48d42851ba5e1a1222e1f2eb24f68210235c910ac77423fe9def29b71929e2f4"} Mar 19 09:27:41.010787 master-0 kubenswrapper[7385]: I0319 09:27:41.010795 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"7414ff0d187efeb091c598330787485add0219e366d0b09f7b817dd18949f28f"} Mar 19 09:27:41.011114 master-0 kubenswrapper[7385]: I0319 09:27:41.010969 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:27:41.029683 master-0 kubenswrapper[7385]: I0319 09:27:41.029578 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.029558022 podStartE2EDuration="2.029558022s" podCreationTimestamp="2026-03-19 09:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:27:41.029221013 +0000 UTC m=+556.703650754" watchObservedRunningTime="2026-03-19 09:27:41.029558022 +0000 UTC m=+556.703987723" Mar 19 09:27:41.308591 master-0 kubenswrapper[7385]: I0319 09:27:41.308560 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:41.414022 master-0 kubenswrapper[7385]: I0319 09:27:41.413980 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d04277ae-5881-4ce1-9157-d58f93a5f116-kube-api-access\") pod \"d04277ae-5881-4ce1-9157-d58f93a5f116\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " Mar 19 09:27:41.414263 master-0 kubenswrapper[7385]: I0319 09:27:41.414249 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-kubelet-dir\") pod \"d04277ae-5881-4ce1-9157-d58f93a5f116\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " Mar 19 09:27:41.414327 master-0 kubenswrapper[7385]: I0319 09:27:41.414304 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d04277ae-5881-4ce1-9157-d58f93a5f116" (UID: "d04277ae-5881-4ce1-9157-d58f93a5f116"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:27:41.414527 master-0 kubenswrapper[7385]: I0319 09:27:41.414514 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-var-lock\") pod \"d04277ae-5881-4ce1-9157-d58f93a5f116\" (UID: \"d04277ae-5881-4ce1-9157-d58f93a5f116\") " Mar 19 09:27:41.414690 master-0 kubenswrapper[7385]: I0319 09:27:41.414532 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-var-lock" (OuterVolumeSpecName: "var-lock") pod "d04277ae-5881-4ce1-9157-d58f93a5f116" (UID: "d04277ae-5881-4ce1-9157-d58f93a5f116"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:27:41.415000 master-0 kubenswrapper[7385]: I0319 09:27:41.414984 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:41.415080 master-0 kubenswrapper[7385]: I0319 09:27:41.415067 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d04277ae-5881-4ce1-9157-d58f93a5f116-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:41.417215 master-0 kubenswrapper[7385]: I0319 09:27:41.417161 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04277ae-5881-4ce1-9157-d58f93a5f116-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d04277ae-5881-4ce1-9157-d58f93a5f116" (UID: "d04277ae-5881-4ce1-9157-d58f93a5f116"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:27:41.516168 master-0 kubenswrapper[7385]: I0319 09:27:41.516130 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d04277ae-5881-4ce1-9157-d58f93a5f116-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:42.020895 master-0 kubenswrapper[7385]: I0319 09:27:42.020830 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"d04277ae-5881-4ce1-9157-d58f93a5f116","Type":"ContainerDied","Data":"b6f9f00de289e7567b86de252b9a6b1c229d174535eede65fb885a8a83fa2393"} Mar 19 09:27:42.020895 master-0 kubenswrapper[7385]: I0319 09:27:42.020890 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f9f00de289e7567b86de252b9a6b1c229d174535eede65fb885a8a83fa2393" Mar 19 09:27:42.020895 master-0 kubenswrapper[7385]: I0319 09:27:42.020848 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 19 09:27:44.710725 master-0 kubenswrapper[7385]: I0319 09:27:44.710679 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"] Mar 19 09:27:44.711243 master-0 kubenswrapper[7385]: I0319 09:27:44.710881 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" podUID="24e84d52-ae67-40d0-a2c5-39160b90fa0e" containerName="route-controller-manager" containerID="cri-o://65027fd5ee877d7eeb4f7d58b9d53307a5dbfb87a18aafd54a59cec2e61bf4d7" gracePeriod=30 Mar 19 09:27:44.721288 master-0 kubenswrapper[7385]: I0319 09:27:44.721244 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bb7458647-2hx6x"] Mar 19 09:27:44.721778 master-0 kubenswrapper[7385]: I0319 09:27:44.721750 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerName="controller-manager" containerID="cri-o://347a1d3bec5889e9ec93363cf938da9436a428160ad0dc8a308e691fb255063e" gracePeriod=30 Mar 19 09:27:45.038283 master-0 kubenswrapper[7385]: I0319 09:27:45.038068 7385 generic.go:334] "Generic (PLEG): container finished" podID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerID="347a1d3bec5889e9ec93363cf938da9436a428160ad0dc8a308e691fb255063e" exitCode=0 Mar 19 09:27:45.038283 master-0 kubenswrapper[7385]: I0319 09:27:45.038143 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" event={"ID":"c1f4f7b3-7f79-4618-b87a-400cadcb9813","Type":"ContainerDied","Data":"347a1d3bec5889e9ec93363cf938da9436a428160ad0dc8a308e691fb255063e"} Mar 19 09:27:45.038283 master-0 kubenswrapper[7385]: I0319 
09:27:45.038184 7385 scope.go:117] "RemoveContainer" containerID="0efec46299b67a0eea4b13ca67058dc6945af55d88748d9fe42464dc879df463" Mar 19 09:27:45.040405 master-0 kubenswrapper[7385]: I0319 09:27:45.040354 7385 generic.go:334] "Generic (PLEG): container finished" podID="24e84d52-ae67-40d0-a2c5-39160b90fa0e" containerID="65027fd5ee877d7eeb4f7d58b9d53307a5dbfb87a18aafd54a59cec2e61bf4d7" exitCode=0 Mar 19 09:27:45.040405 master-0 kubenswrapper[7385]: I0319 09:27:45.040378 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" event={"ID":"24e84d52-ae67-40d0-a2c5-39160b90fa0e","Type":"ContainerDied","Data":"65027fd5ee877d7eeb4f7d58b9d53307a5dbfb87a18aafd54a59cec2e61bf4d7"} Mar 19 09:27:45.191953 master-0 kubenswrapper[7385]: I0319 09:27:45.191920 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" Mar 19 09:27:45.260016 master-0 kubenswrapper[7385]: I0319 09:27:45.259912 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnbq8\" (UniqueName: \"kubernetes.io/projected/24e84d52-ae67-40d0-a2c5-39160b90fa0e-kube-api-access-jnbq8\") pod \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " Mar 19 09:27:45.260248 master-0 kubenswrapper[7385]: I0319 09:27:45.260233 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e84d52-ae67-40d0-a2c5-39160b90fa0e-serving-cert\") pod \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " Mar 19 09:27:45.260369 master-0 kubenswrapper[7385]: I0319 09:27:45.260356 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-config\") pod 
\"24e84d52-ae67-40d0-a2c5-39160b90fa0e\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " Mar 19 09:27:45.260514 master-0 kubenswrapper[7385]: I0319 09:27:45.260493 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-client-ca\") pod \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\" (UID: \"24e84d52-ae67-40d0-a2c5-39160b90fa0e\") " Mar 19 09:27:45.260859 master-0 kubenswrapper[7385]: I0319 09:27:45.260822 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-client-ca" (OuterVolumeSpecName: "client-ca") pod "24e84d52-ae67-40d0-a2c5-39160b90fa0e" (UID: "24e84d52-ae67-40d0-a2c5-39160b90fa0e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:27:45.260958 master-0 kubenswrapper[7385]: I0319 09:27:45.260909 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-config" (OuterVolumeSpecName: "config") pod "24e84d52-ae67-40d0-a2c5-39160b90fa0e" (UID: "24e84d52-ae67-40d0-a2c5-39160b90fa0e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:27:45.262834 master-0 kubenswrapper[7385]: I0319 09:27:45.262797 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e84d52-ae67-40d0-a2c5-39160b90fa0e-kube-api-access-jnbq8" (OuterVolumeSpecName: "kube-api-access-jnbq8") pod "24e84d52-ae67-40d0-a2c5-39160b90fa0e" (UID: "24e84d52-ae67-40d0-a2c5-39160b90fa0e"). InnerVolumeSpecName "kube-api-access-jnbq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:27:45.266861 master-0 kubenswrapper[7385]: I0319 09:27:45.266819 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e84d52-ae67-40d0-a2c5-39160b90fa0e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "24e84d52-ae67-40d0-a2c5-39160b90fa0e" (UID: "24e84d52-ae67-40d0-a2c5-39160b90fa0e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:27:45.364096 master-0 kubenswrapper[7385]: I0319 09:27:45.364057 7385 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.364190 master-0 kubenswrapper[7385]: I0319 09:27:45.364103 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnbq8\" (UniqueName: \"kubernetes.io/projected/24e84d52-ae67-40d0-a2c5-39160b90fa0e-kube-api-access-jnbq8\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.364190 master-0 kubenswrapper[7385]: I0319 09:27:45.364118 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24e84d52-ae67-40d0-a2c5-39160b90fa0e-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.364190 master-0 kubenswrapper[7385]: I0319 09:27:45.364130 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24e84d52-ae67-40d0-a2c5-39160b90fa0e-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.612617 master-0 kubenswrapper[7385]: I0319 09:27:45.612532 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" Mar 19 09:27:45.768845 master-0 kubenswrapper[7385]: I0319 09:27:45.768795 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-config\") pod \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " Mar 19 09:27:45.769354 master-0 kubenswrapper[7385]: I0319 09:27:45.768878 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-client-ca\") pod \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " Mar 19 09:27:45.769354 master-0 kubenswrapper[7385]: I0319 09:27:45.769035 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhwqs\" (UniqueName: \"kubernetes.io/projected/c1f4f7b3-7f79-4618-b87a-400cadcb9813-kube-api-access-nhwqs\") pod \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " Mar 19 09:27:45.769354 master-0 kubenswrapper[7385]: I0319 09:27:45.769062 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-proxy-ca-bundles\") pod \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " Mar 19 09:27:45.769354 master-0 kubenswrapper[7385]: I0319 09:27:45.769091 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f4f7b3-7f79-4618-b87a-400cadcb9813-serving-cert\") pod \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\" (UID: \"c1f4f7b3-7f79-4618-b87a-400cadcb9813\") " Mar 19 09:27:45.769472 master-0 kubenswrapper[7385]: I0319 09:27:45.769437 7385 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-config" (OuterVolumeSpecName: "config") pod "c1f4f7b3-7f79-4618-b87a-400cadcb9813" (UID: "c1f4f7b3-7f79-4618-b87a-400cadcb9813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:27:45.769674 master-0 kubenswrapper[7385]: I0319 09:27:45.769474 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-client-ca" (OuterVolumeSpecName: "client-ca") pod "c1f4f7b3-7f79-4618-b87a-400cadcb9813" (UID: "c1f4f7b3-7f79-4618-b87a-400cadcb9813"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:27:45.769804 master-0 kubenswrapper[7385]: I0319 09:27:45.769770 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c1f4f7b3-7f79-4618-b87a-400cadcb9813" (UID: "c1f4f7b3-7f79-4618-b87a-400cadcb9813"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:27:45.772143 master-0 kubenswrapper[7385]: I0319 09:27:45.772103 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f4f7b3-7f79-4618-b87a-400cadcb9813-kube-api-access-nhwqs" (OuterVolumeSpecName: "kube-api-access-nhwqs") pod "c1f4f7b3-7f79-4618-b87a-400cadcb9813" (UID: "c1f4f7b3-7f79-4618-b87a-400cadcb9813"). InnerVolumeSpecName "kube-api-access-nhwqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:27:45.772333 master-0 kubenswrapper[7385]: I0319 09:27:45.772309 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f4f7b3-7f79-4618-b87a-400cadcb9813-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c1f4f7b3-7f79-4618-b87a-400cadcb9813" (UID: "c1f4f7b3-7f79-4618-b87a-400cadcb9813"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:27:45.871745 master-0 kubenswrapper[7385]: I0319 09:27:45.871578 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.871745 master-0 kubenswrapper[7385]: I0319 09:27:45.871622 7385 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.871745 master-0 kubenswrapper[7385]: I0319 09:27:45.871635 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhwqs\" (UniqueName: \"kubernetes.io/projected/c1f4f7b3-7f79-4618-b87a-400cadcb9813-kube-api-access-nhwqs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.871745 master-0 kubenswrapper[7385]: I0319 09:27:45.871644 7385 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1f4f7b3-7f79-4618-b87a-400cadcb9813-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.871745 master-0 kubenswrapper[7385]: I0319 09:27:45.871654 7385 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1f4f7b3-7f79-4618-b87a-400cadcb9813-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:27:45.874112 master-0 kubenswrapper[7385]: I0319 09:27:45.873741 7385 
kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 19 09:27:45.874344 master-0 kubenswrapper[7385]: I0319 09:27:45.874124 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://cfcf72a5968a35b223ff650bf76501a556c4762493ff456643c088edb64e0ea9" gracePeriod=30 Mar 19 09:27:45.874344 master-0 kubenswrapper[7385]: I0319 09:27:45.874262 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://2aa5aa662ffa0437e2fa27777a57474f61a992c00f287dd244d781ce0481e24a" gracePeriod=30 Mar 19 09:27:45.874344 master-0 kubenswrapper[7385]: I0319 09:27:45.874308 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://577e1cb78b7983d3ec252dc0914c0a0c436d8757170116f9a3b932229b0de3fc" gracePeriod=30 Mar 19 09:27:45.874344 master-0 kubenswrapper[7385]: I0319 09:27:45.874339 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://bf41ec4f73d991e650705cd7dc50f09d5379b830fb106a5b2bf29cf8cf16aa01" gracePeriod=30 Mar 19 09:27:45.874567 master-0 kubenswrapper[7385]: I0319 09:27:45.874368 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://3ed225a36fa4421795f63a78a99d058f08eb76290885a7395566f826ec754799" gracePeriod=30 Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879255 7385 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-etcd/etcd-master-0"] Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879587 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerName="controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879604 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerName="controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879618 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879626 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879637 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879645 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879661 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879668 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879689 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879697 7385 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879709 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e84d52-ae67-40d0-a2c5-39160b90fa0e" containerName="route-controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879717 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e84d52-ae67-40d0-a2c5-39160b90fa0e" containerName="route-controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879725 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879732 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879746 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879753 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879765 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879773 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879782 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 19 09:27:45.888395 master-0 
kubenswrapper[7385]: I0319 09:27:45.879789 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.879800 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerName="installer" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879807 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerName="installer" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879920 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerName="installer" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879934 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerName="controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879943 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879951 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879963 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879975 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879989 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" 
containerName="etcd-readyz" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.879999 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e84d52-ae67-40d0-a2c5-39160b90fa0e" containerName="route-controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: E0319 09:27:45.880113 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerName="controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.880121 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerName="controller-manager" Mar 19 09:27:45.888395 master-0 kubenswrapper[7385]: I0319 09:27:45.880224 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" containerName="controller-manager" Mar 19 09:27:46.047066 master-0 kubenswrapper[7385]: I0319 09:27:46.046929 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" event={"ID":"c1f4f7b3-7f79-4618-b87a-400cadcb9813","Type":"ContainerDied","Data":"dd6594e954d08ee67ec8680ad3fdfb434e914edac0a0cdab038657341b6e046d"} Mar 19 09:27:46.047066 master-0 kubenswrapper[7385]: I0319 09:27:46.046974 7385 scope.go:117] "RemoveContainer" containerID="347a1d3bec5889e9ec93363cf938da9436a428160ad0dc8a308e691fb255063e" Mar 19 09:27:46.047066 master-0 kubenswrapper[7385]: I0319 09:27:46.046942 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" Mar 19 09:27:46.049580 master-0 kubenswrapper[7385]: I0319 09:27:46.049535 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" event={"ID":"24e84d52-ae67-40d0-a2c5-39160b90fa0e","Type":"ContainerDied","Data":"1dd0b82916d571c08bf7c2cfa784425a6e307c8f39fb70ae48f26d80408e5899"} Mar 19 09:27:46.049749 master-0 kubenswrapper[7385]: I0319 09:27:46.049735 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj" Mar 19 09:27:46.060793 master-0 kubenswrapper[7385]: I0319 09:27:46.054431 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 19 09:27:46.060793 master-0 kubenswrapper[7385]: I0319 09:27:46.055363 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 19 09:27:46.060793 master-0 kubenswrapper[7385]: I0319 09:27:46.057035 7385 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="2aa5aa662ffa0437e2fa27777a57474f61a992c00f287dd244d781ce0481e24a" exitCode=2 Mar 19 09:27:46.060793 master-0 kubenswrapper[7385]: I0319 09:27:46.057055 7385 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="577e1cb78b7983d3ec252dc0914c0a0c436d8757170116f9a3b932229b0de3fc" exitCode=0 Mar 19 09:27:46.060793 master-0 kubenswrapper[7385]: I0319 09:27:46.057070 7385 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="bf41ec4f73d991e650705cd7dc50f09d5379b830fb106a5b2bf29cf8cf16aa01" exitCode=2 Mar 19 09:27:46.073627 master-0 kubenswrapper[7385]: I0319 
09:27:46.073573 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:27:46.073838 master-0 kubenswrapper[7385]: I0319 09:27:46.073639 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:27:46.073838 master-0 kubenswrapper[7385]: I0319 09:27:46.073777 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:27:46.073968 master-0 kubenswrapper[7385]: I0319 09:27:46.073935 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:27:46.074033 master-0 kubenswrapper[7385]: I0319 09:27:46.073999 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:27:46.074354 master-0 kubenswrapper[7385]: I0319 09:27:46.074315 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.077532 master-0 kubenswrapper[7385]: I0319 09:27:46.077494 7385 scope.go:117] "RemoveContainer" containerID="65027fd5ee877d7eeb4f7d58b9d53307a5dbfb87a18aafd54a59cec2e61bf4d7"
Mar 19 09:27:46.176006 master-0 kubenswrapper[7385]: I0319 09:27:46.175911 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176043 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176054 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176099 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176126 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176176 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176201 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176255 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176322 master-0 kubenswrapper[7385]: I0319 09:27:46.176291 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176622 master-0 kubenswrapper[7385]: I0319 09:27:46.176398 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176622 master-0 kubenswrapper[7385]: I0319 09:27:46.176436 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:46.176622 master-0 kubenswrapper[7385]: I0319 09:27:46.176500 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 19 09:27:49.081439 master-0 kubenswrapper[7385]: I0319 09:27:49.081389 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/2.log"
Mar 19 09:27:49.082210 master-0 kubenswrapper[7385]: I0319 09:27:49.082178 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/1.log"
Mar 19 09:27:49.082645 master-0 kubenswrapper[7385]: I0319 09:27:49.082608 7385 generic.go:334] "Generic (PLEG): container finished" podID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" containerID="0a826efef4d4285208df9ac62804747687dd3c66bd7c0716a36851e3ff4bbfd4" exitCode=1
Mar 19 09:27:49.082721 master-0 kubenswrapper[7385]: I0319 09:27:49.082646 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerDied","Data":"0a826efef4d4285208df9ac62804747687dd3c66bd7c0716a36851e3ff4bbfd4"}
Mar 19 09:27:49.082721 master-0 kubenswrapper[7385]: I0319 09:27:49.082680 7385 scope.go:117] "RemoveContainer" containerID="88c0357aa022be581857f74b9852a9a65a3b84fe610ed6b9bc79f94f9ef05744"
Mar 19 09:27:49.083239 master-0 kubenswrapper[7385]: I0319 09:27:49.083210 7385 scope.go:117] "RemoveContainer" containerID="0a826efef4d4285208df9ac62804747687dd3c66bd7c0716a36851e3ff4bbfd4"
Mar 19 09:27:49.083508 master-0 kubenswrapper[7385]: E0319 09:27:49.083476 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a"
Mar 19 09:27:50.088888 master-0 kubenswrapper[7385]: I0319 09:27:50.088814 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/2.log"
Mar 19 09:27:52.108197 master-0 kubenswrapper[7385]: I0319 09:27:52.108126 7385 generic.go:334] "Generic (PLEG): container finished" podID="57227a66-c758-4a46-a5e1-f603baa3f570" containerID="3a5dd314e61c7e5e336d52053d0330f63d21f00e76686c7b0a177fb71dc220dc" exitCode=0
Mar 19 09:27:52.108866 master-0 kubenswrapper[7385]: I0319 09:27:52.108191 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerDied","Data":"3a5dd314e61c7e5e336d52053d0330f63d21f00e76686c7b0a177fb71dc220dc"}
Mar 19 09:27:52.108866 master-0 kubenswrapper[7385]: I0319 09:27:52.108264 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerStarted","Data":"2fae7b44934deb2f61dfa30059ff2a9d4e27ce928263e021c35df2bf0416f39e"}
Mar 19 09:27:52.108866 master-0 kubenswrapper[7385]: I0319 09:27:52.108296 7385 scope.go:117] "RemoveContainer" containerID="5cb6c10ede1632045f4c6b7b809db52b73fe2590e0eca9bb5097244794291556"
Mar 19 09:27:52.728418 master-0 kubenswrapper[7385]: I0319 09:27:52.728324 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:27:52.731649 master-0 kubenswrapper[7385]: I0319 09:27:52.731578 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:52.731649 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:52.731649 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:52.731649 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:52.732052 master-0 kubenswrapper[7385]: I0319 09:27:52.731661 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:27:53.730440 master-0 kubenswrapper[7385]: I0319 09:27:53.730389 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:53.730440 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:53.730440 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:53.730440 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:53.730965 master-0 kubenswrapper[7385]: I0319 09:27:53.730450 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:27:54.730123 master-0 kubenswrapper[7385]: I0319 09:27:54.730047 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:54.730123 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:54.730123 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:54.730123 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:54.730406 master-0 kubenswrapper[7385]: I0319 09:27:54.730149 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:27:55.731079 master-0 kubenswrapper[7385]: I0319 09:27:55.731010 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:55.731079 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:55.731079 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:55.731079 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:55.731726 master-0 kubenswrapper[7385]: I0319 09:27:55.731113 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:27:56.730208 master-0 kubenswrapper[7385]: I0319 09:27:56.730143 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:56.730208 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:56.730208 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:56.730208 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:56.730482 master-0 kubenswrapper[7385]: I0319 09:27:56.730215 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:27:57.627882 master-0 kubenswrapper[7385]: E0319 09:27:57.626526 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:27:57.731857 master-0 kubenswrapper[7385]: I0319 09:27:57.731808 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:57.731857 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:57.731857 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:57.731857 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:57.732111 master-0 kubenswrapper[7385]: I0319 09:27:57.731866 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:27:58.730860 master-0 kubenswrapper[7385]: I0319 09:27:58.730789 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:58.730860 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:58.730860 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:58.730860 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:58.731446 master-0 kubenswrapper[7385]: I0319 09:27:58.730865 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:27:59.158735 master-0 kubenswrapper[7385]: I0319 09:27:59.158688 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log"
Mar 19 09:27:59.158924 master-0 kubenswrapper[7385]: I0319 09:27:59.158738 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="57d853d1ec8afcb012b4bb8c0bf03fdeac8c6cbef5eb24aa2fea3d5801611fb9" exitCode=1
Mar 19 09:27:59.158924 master-0 kubenswrapper[7385]: I0319 09:27:59.158765 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerDied","Data":"57d853d1ec8afcb012b4bb8c0bf03fdeac8c6cbef5eb24aa2fea3d5801611fb9"}
Mar 19 09:27:59.159176 master-0 kubenswrapper[7385]: I0319 09:27:59.159154 7385 scope.go:117] "RemoveContainer" containerID="57d853d1ec8afcb012b4bb8c0bf03fdeac8c6cbef5eb24aa2fea3d5801611fb9"
Mar 19 09:27:59.728690 master-0 kubenswrapper[7385]: I0319 09:27:59.728645 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:27:59.730047 master-0 kubenswrapper[7385]: I0319 09:27:59.730009 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:27:59.730047 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:27:59.730047 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:27:59.730047 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:27:59.730197 master-0 kubenswrapper[7385]: I0319 09:27:59.730062 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:00.168964 master-0 kubenswrapper[7385]: I0319 09:28:00.168913 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log"
Mar 19 09:28:00.169507 master-0 kubenswrapper[7385]: I0319 09:28:00.169026 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"33fbab3dae4d95c59279d28953be3dee55bacb9a970231a9a8855ae0fd8f5ddd"}
Mar 19 09:28:00.171157 master-0 kubenswrapper[7385]: I0319 09:28:00.171117 7385 generic.go:334] "Generic (PLEG): container finished" podID="e5780efa-c56a-4953-807f-6a51efc91b09" containerID="bf5e3834612c0d4b8b32cfe23c6154f92dbaf9ab5151f44cf79b7b61c3d85739" exitCode=0
Mar 19 09:28:00.171250 master-0 kubenswrapper[7385]: I0319 09:28:00.171166 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e5780efa-c56a-4953-807f-6a51efc91b09","Type":"ContainerDied","Data":"bf5e3834612c0d4b8b32cfe23c6154f92dbaf9ab5151f44cf79b7b61c3d85739"}
Mar 19 09:28:00.731133 master-0 kubenswrapper[7385]: I0319 09:28:00.731036 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:00.731133 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:00.731133 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:00.731133 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:00.731133 master-0 kubenswrapper[7385]: I0319 09:28:00.731119 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:01.444855 master-0 kubenswrapper[7385]: I0319 09:28:01.444769 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 19 09:28:01.530562 master-0 kubenswrapper[7385]: I0319 09:28:01.530513 7385 scope.go:117] "RemoveContainer" containerID="0a826efef4d4285208df9ac62804747687dd3c66bd7c0716a36851e3ff4bbfd4"
Mar 19 09:28:01.530784 master-0 kubenswrapper[7385]: E0319 09:28:01.530759 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a"
Mar 19 09:28:01.575129 master-0 kubenswrapper[7385]: I0319 09:28:01.575074 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5780efa-c56a-4953-807f-6a51efc91b09-kube-api-access\") pod \"e5780efa-c56a-4953-807f-6a51efc91b09\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") "
Mar 19 09:28:01.575320 master-0 kubenswrapper[7385]: I0319 09:28:01.575152 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-kubelet-dir\") pod \"e5780efa-c56a-4953-807f-6a51efc91b09\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") "
Mar 19 09:28:01.575320 master-0 kubenswrapper[7385]: I0319 09:28:01.575244 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e5780efa-c56a-4953-807f-6a51efc91b09" (UID: "e5780efa-c56a-4953-807f-6a51efc91b09"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:28:01.575501 master-0 kubenswrapper[7385]: I0319 09:28:01.575476 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-var-lock\") pod \"e5780efa-c56a-4953-807f-6a51efc91b09\" (UID: \"e5780efa-c56a-4953-807f-6a51efc91b09\") "
Mar 19 09:28:01.575672 master-0 kubenswrapper[7385]: I0319 09:28:01.575653 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-var-lock" (OuterVolumeSpecName: "var-lock") pod "e5780efa-c56a-4953-807f-6a51efc91b09" (UID: "e5780efa-c56a-4953-807f-6a51efc91b09"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:28:01.576503 master-0 kubenswrapper[7385]: I0319 09:28:01.576478 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:28:01.576503 master-0 kubenswrapper[7385]: I0319 09:28:01.576502 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e5780efa-c56a-4953-807f-6a51efc91b09-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:28:01.577725 master-0 kubenswrapper[7385]: I0319 09:28:01.577693 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5780efa-c56a-4953-807f-6a51efc91b09-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e5780efa-c56a-4953-807f-6a51efc91b09" (UID: "e5780efa-c56a-4953-807f-6a51efc91b09"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:28:01.677103 master-0 kubenswrapper[7385]: I0319 09:28:01.677057 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e5780efa-c56a-4953-807f-6a51efc91b09-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:28:01.730377 master-0 kubenswrapper[7385]: I0319 09:28:01.730286 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:01.730377 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:01.730377 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:01.730377 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:01.730758 master-0 kubenswrapper[7385]: I0319 09:28:01.730734 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:02.184569 master-0 kubenswrapper[7385]: I0319 09:28:02.184492 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"e5780efa-c56a-4953-807f-6a51efc91b09","Type":"ContainerDied","Data":"41308b73b0cd59e2ebd3a9e2ccbd13c59e32ef712883338ba6a663fd6955d3dc"}
Mar 19 09:28:02.184569 master-0 kubenswrapper[7385]: I0319 09:28:02.184534 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41308b73b0cd59e2ebd3a9e2ccbd13c59e32ef712883338ba6a663fd6955d3dc"
Mar 19 09:28:02.185008 master-0 kubenswrapper[7385]: I0319 09:28:02.184951 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 19 09:28:02.731160 master-0 kubenswrapper[7385]: I0319 09:28:02.731020 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:02.731160 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:02.731160 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:02.731160 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:02.731160 master-0 kubenswrapper[7385]: I0319 09:28:02.731103 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:03.730913 master-0 kubenswrapper[7385]: I0319 09:28:03.730840 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:03.730913 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:03.730913 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:03.730913 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:03.731217 master-0 kubenswrapper[7385]: I0319 09:28:03.731015 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:03.991104 master-0 kubenswrapper[7385]: I0319 09:28:03.990977 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:28:03.991104 master-0 kubenswrapper[7385]: I0319 09:28:03.991036 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:28:03.996031 master-0 kubenswrapper[7385]: I0319 09:28:03.995999 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:28:04.731043 master-0 kubenswrapper[7385]: I0319 09:28:04.730966 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:04.731043 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:04.731043 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:04.731043 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:04.732186 master-0 kubenswrapper[7385]: I0319 09:28:04.731055 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:05.730618 master-0 kubenswrapper[7385]: I0319 09:28:05.730521 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:05.730618 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:05.730618 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:05.730618 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:05.730618 master-0 kubenswrapper[7385]: I0319 09:28:05.730599 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:06.732103 master-0 kubenswrapper[7385]: I0319 09:28:06.732004 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:06.732103 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:06.732103 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:06.732103 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:06.732103 master-0 kubenswrapper[7385]: I0319 09:28:06.732073 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:07.626862 master-0 kubenswrapper[7385]: E0319 09:28:07.626775 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:28:07.731111 master-0 kubenswrapper[7385]: I0319 09:28:07.731026 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:07.731111 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:07.731111 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:07.731111 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:07.731111 master-0 kubenswrapper[7385]: I0319 09:28:07.731093 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:08.731239 master-0 kubenswrapper[7385]: I0319 09:28:08.731168 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:08.731239 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:08.731239 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:08.731239 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:08.731784 master-0 kubenswrapper[7385]: I0319 09:28:08.731259 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:09.731167 master-0 kubenswrapper[7385]: I0319 09:28:09.731083 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:09.731167 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:09.731167 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:09.731167 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:09.732147 master-0 kubenswrapper[7385]: I0319 09:28:09.731195 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:10.730357 master-0 kubenswrapper[7385]: I0319 09:28:10.730302 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:10.730357 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:10.730357 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:10.730357 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:10.730627 master-0 kubenswrapper[7385]: I0319 09:28:10.730372 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:11.910733 master-0 kubenswrapper[7385]: I0319 09:28:11.910677 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:11.910733 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:11.910733 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:11.910733 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:11.911263 master-0 kubenswrapper[7385]: I0319 09:28:11.910749 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:12.529893 master-0 kubenswrapper[7385]: I0319 09:28:12.529842 7385 scope.go:117] "RemoveContainer" containerID="0a826efef4d4285208df9ac62804747687dd3c66bd7c0716a36851e3ff4bbfd4"
Mar 19 09:28:12.730832 master-0 kubenswrapper[7385]: I0319 09:28:12.730773 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:12.730832 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:12.730832 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:12.730832 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:12.731127 master-0 kubenswrapper[7385]: I0319 09:28:12.730840 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:13.249647 master-0 kubenswrapper[7385]: I0319 09:28:13.249611 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/2.log"
Mar 19 09:28:13.250110 master-0 kubenswrapper[7385]: I0319 09:28:13.249919 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982"}
Mar 19 09:28:13.731121 master-0 kubenswrapper[7385]: I0319 09:28:13.731056 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:13.731121 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:13.731121 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:13.731121 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:13.731438 master-0 kubenswrapper[7385]: I0319 09:28:13.731198 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:13.995763 master-0 kubenswrapper[7385]: I0319 09:28:13.995654 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:28:14.730805 master-0 kubenswrapper[7385]: I0319 09:28:14.730750 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:14.730805 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:14.730805 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:14.730805 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:14.731444 master-0 kubenswrapper[7385]: I0319 09:28:14.730825 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:15.731471 master-0 kubenswrapper[7385]: I0319 09:28:15.731392 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:15.731471 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:15.731471 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:15.731471 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:15.732435 master-0 kubenswrapper[7385]: I0319 09:28:15.731479 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:16.274183 master-0 kubenswrapper[7385]: I0319 09:28:16.274130 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log"
Mar 19 09:28:16.275232 master-0 kubenswrapper[7385]: I0319 09:28:16.275197 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log"
Mar 19 09:28:16.275832 master-0 kubenswrapper[7385]: I0319 09:28:16.275791 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log"
Mar 19 09:28:16.276271 master-0 kubenswrapper[7385]: I0319 09:28:16.276242 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log"
Mar 19 09:28:16.277685 master-0 kubenswrapper[7385]: I0319 09:28:16.277613 7385 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="3ed225a36fa4421795f63a78a99d058f08eb76290885a7395566f826ec754799" exitCode=137
Mar 19 09:28:16.277685 master-0 kubenswrapper[7385]: I0319 09:28:16.277675 7385 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="cfcf72a5968a35b223ff650bf76501a556c4762493ff456643c088edb64e0ea9" exitCode=137
Mar 19
09:28:16.446768 master-0 kubenswrapper[7385]: I0319 09:28:16.446728 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 19 09:28:16.448228 master-0 kubenswrapper[7385]: I0319 09:28:16.448190 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 19 09:28:16.449424 master-0 kubenswrapper[7385]: I0319 09:28:16.449371 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 19 09:28:16.449922 master-0 kubenswrapper[7385]: I0319 09:28:16.449878 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 19 09:28:16.451101 master-0 kubenswrapper[7385]: I0319 09:28:16.451066 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 09:28:16.580229 master-0 kubenswrapper[7385]: I0319 09:28:16.580090 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 09:28:16.580229 master-0 kubenswrapper[7385]: I0319 09:28:16.580133 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 09:28:16.580229 master-0 kubenswrapper[7385]: I0319 09:28:16.580198 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 09:28:16.580229 master-0 kubenswrapper[7385]: I0319 09:28:16.580235 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580288 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580308 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580310 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580352 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). 
InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580310 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580392 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580437 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:16.580823 master-0 kubenswrapper[7385]: I0319 09:28:16.580743 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:16.581330 master-0 kubenswrapper[7385]: I0319 09:28:16.580851 7385 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:16.581330 master-0 kubenswrapper[7385]: I0319 09:28:16.580882 7385 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:16.581330 master-0 kubenswrapper[7385]: I0319 09:28:16.580902 7385 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:16.581330 master-0 kubenswrapper[7385]: I0319 09:28:16.580924 7385 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:16.581330 master-0 kubenswrapper[7385]: I0319 09:28:16.580940 7385 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:16.682214 master-0 kubenswrapper[7385]: I0319 09:28:16.682118 7385 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:16.732321 master-0 kubenswrapper[7385]: I0319 09:28:16.731759 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 19 09:28:16.732321 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:16.732321 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:16.732321 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:16.732321 master-0 kubenswrapper[7385]: I0319 09:28:16.731911 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:17.286104 master-0 kubenswrapper[7385]: I0319 09:28:17.285985 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 19 09:28:17.287378 master-0 kubenswrapper[7385]: I0319 09:28:17.287329 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 19 09:28:17.288264 master-0 kubenswrapper[7385]: I0319 09:28:17.288213 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 19 09:28:17.288735 master-0 kubenswrapper[7385]: I0319 09:28:17.288699 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 19 09:28:17.289912 master-0 kubenswrapper[7385]: I0319 09:28:17.289870 7385 scope.go:117] "RemoveContainer" containerID="2aa5aa662ffa0437e2fa27777a57474f61a992c00f287dd244d781ce0481e24a" Mar 19 09:28:17.290095 master-0 kubenswrapper[7385]: I0319 09:28:17.290048 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 09:28:17.313980 master-0 kubenswrapper[7385]: I0319 09:28:17.313873 7385 scope.go:117] "RemoveContainer" containerID="577e1cb78b7983d3ec252dc0914c0a0c436d8757170116f9a3b932229b0de3fc" Mar 19 09:28:17.336949 master-0 kubenswrapper[7385]: I0319 09:28:17.336909 7385 scope.go:117] "RemoveContainer" containerID="bf41ec4f73d991e650705cd7dc50f09d5379b830fb106a5b2bf29cf8cf16aa01" Mar 19 09:28:17.348800 master-0 kubenswrapper[7385]: I0319 09:28:17.348763 7385 scope.go:117] "RemoveContainer" containerID="3ed225a36fa4421795f63a78a99d058f08eb76290885a7395566f826ec754799" Mar 19 09:28:17.359817 master-0 kubenswrapper[7385]: I0319 09:28:17.359779 7385 scope.go:117] "RemoveContainer" containerID="cfcf72a5968a35b223ff650bf76501a556c4762493ff456643c088edb64e0ea9" Mar 19 09:28:17.370316 master-0 kubenswrapper[7385]: I0319 09:28:17.370288 7385 scope.go:117] "RemoveContainer" containerID="f969fe9873a3954169d30a02594ff223c659b89547ce589e4efba58ec438e923" Mar 19 09:28:17.380478 master-0 kubenswrapper[7385]: I0319 09:28:17.380438 7385 scope.go:117] "RemoveContainer" containerID="9d6f4e81b24bbc088f03886bb58933c7482c216dc5c189aa0267f9e14838f10a" Mar 19 09:28:17.391661 master-0 kubenswrapper[7385]: I0319 09:28:17.391628 7385 scope.go:117] "RemoveContainer" containerID="6c34410d1b933db4369369ed45b2834d81e2f45432196f8498337d329dbd86c7" Mar 19 09:28:17.627590 master-0 kubenswrapper[7385]: E0319 09:28:17.627445 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:28:17.731859 master-0 kubenswrapper[7385]: I0319 09:28:17.731781 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:17.731859 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:17.731859 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:17.731859 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:17.731859 master-0 kubenswrapper[7385]: I0319 09:28:17.731861 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:18.530006 master-0 kubenswrapper[7385]: I0319 09:28:18.529869 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 19 09:28:18.545814 master-0 kubenswrapper[7385]: I0319 09:28:18.545716 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 19 09:28:18.549941 master-0 kubenswrapper[7385]: I0319 09:28:18.549864 7385 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825" Mar 19 09:28:18.549941 master-0 kubenswrapper[7385]: I0319 09:28:18.549914 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825" Mar 19 09:28:18.731379 master-0 kubenswrapper[7385]: I0319 09:28:18.731325 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:18.731379 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:18.731379 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:18.731379 master-0 
kubenswrapper[7385]: healthz check failed Mar 19 09:28:18.731712 master-0 kubenswrapper[7385]: I0319 09:28:18.731398 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:19.731535 master-0 kubenswrapper[7385]: I0319 09:28:19.731372 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:19.731535 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:19.731535 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:19.731535 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:19.731535 master-0 kubenswrapper[7385]: I0319 09:28:19.731431 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:19.926955 master-0 kubenswrapper[7385]: E0319 09:28:19.926772 7385 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189e33fd371280cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:27:45.874256079 +0000 UTC 
m=+561.548685780,LastTimestamp:2026-03-19 09:27:45.874256079 +0000 UTC m=+561.548685780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:28:20.732250 master-0 kubenswrapper[7385]: I0319 09:28:20.732124 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:20.732250 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:20.732250 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:20.732250 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:20.732250 master-0 kubenswrapper[7385]: I0319 09:28:20.732226 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:21.731596 master-0 kubenswrapper[7385]: I0319 09:28:21.731503 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:21.731596 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:21.731596 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:21.731596 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:21.732036 master-0 kubenswrapper[7385]: I0319 09:28:21.731625 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 19 09:28:22.731110 master-0 kubenswrapper[7385]: I0319 09:28:22.731039 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:22.731110 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:22.731110 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:22.731110 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:22.731110 master-0 kubenswrapper[7385]: I0319 09:28:22.731105 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:23.731779 master-0 kubenswrapper[7385]: I0319 09:28:23.731700 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:23.731779 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:23.731779 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:23.731779 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:23.732376 master-0 kubenswrapper[7385]: I0319 09:28:23.731811 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:24.731243 master-0 kubenswrapper[7385]: I0319 09:28:24.731147 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:24.731243 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:24.731243 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:24.731243 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:24.731243 master-0 kubenswrapper[7385]: I0319 09:28:24.731222 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:25.108963 master-0 kubenswrapper[7385]: I0319 09:28:25.108911 7385 scope.go:117] "RemoveContainer" containerID="64562b405a3862b5592fdd93f8c95623b24024a5e23281d2b69f8ff3942c63c6" Mar 19 09:28:25.346056 master-0 kubenswrapper[7385]: I0319 09:28:25.346004 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_fe4e3a0b-973b-4534-b91c-1e870e4e5c32/installer/0.log" Mar 19 09:28:25.346230 master-0 kubenswrapper[7385]: I0319 09:28:25.346085 7385 generic.go:334] "Generic (PLEG): container finished" podID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerID="4772110931eb3a91b47fd2a5b7d728bb53faceca1654dd37bae708926fff76ac" exitCode=1 Mar 19 09:28:25.346230 master-0 kubenswrapper[7385]: I0319 09:28:25.346132 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"fe4e3a0b-973b-4534-b91c-1e870e4e5c32","Type":"ContainerDied","Data":"4772110931eb3a91b47fd2a5b7d728bb53faceca1654dd37bae708926fff76ac"} Mar 19 09:28:25.732162 master-0 kubenswrapper[7385]: I0319 09:28:25.732073 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:25.732162 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:25.732162 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:25.732162 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:25.732162 master-0 kubenswrapper[7385]: I0319 09:28:25.732127 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:26.693426 master-0 kubenswrapper[7385]: I0319 09:28:26.693389 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_fe4e3a0b-973b-4534-b91c-1e870e4e5c32/installer/0.log" Mar 19 09:28:26.693630 master-0 kubenswrapper[7385]: I0319 09:28:26.693458 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 19 09:28:26.730880 master-0 kubenswrapper[7385]: I0319 09:28:26.730844 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:28:26.730880 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:28:26.730880 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:28:26.730880 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:28:26.731128 master-0 kubenswrapper[7385]: I0319 09:28:26.730896 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:28:26.823305 master-0 
kubenswrapper[7385]: I0319 09:28:26.822756 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kubelet-dir\") pod \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " Mar 19 09:28:26.823305 master-0 kubenswrapper[7385]: I0319 09:28:26.822880 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-var-lock\") pod \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " Mar 19 09:28:26.823305 master-0 kubenswrapper[7385]: I0319 09:28:26.822961 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fe4e3a0b-973b-4534-b91c-1e870e4e5c32" (UID: "fe4e3a0b-973b-4534-b91c-1e870e4e5c32"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:26.823305 master-0 kubenswrapper[7385]: I0319 09:28:26.823021 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-var-lock" (OuterVolumeSpecName: "var-lock") pod "fe4e3a0b-973b-4534-b91c-1e870e4e5c32" (UID: "fe4e3a0b-973b-4534-b91c-1e870e4e5c32"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:28:26.823305 master-0 kubenswrapper[7385]: I0319 09:28:26.822981 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kube-api-access\") pod \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\" (UID: \"fe4e3a0b-973b-4534-b91c-1e870e4e5c32\") " Mar 19 09:28:26.824499 master-0 kubenswrapper[7385]: I0319 09:28:26.823615 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:26.824499 master-0 kubenswrapper[7385]: I0319 09:28:26.823632 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:28:26.826486 master-0 kubenswrapper[7385]: I0319 09:28:26.826407 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fe4e3a0b-973b-4534-b91c-1e870e4e5c32" (UID: "fe4e3a0b-973b-4534-b91c-1e870e4e5c32"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:28:26.925145 master-0 kubenswrapper[7385]: I0319 09:28:26.924986 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe4e3a0b-973b-4534-b91c-1e870e4e5c32-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 19 09:28:27.364190 master-0 kubenswrapper[7385]: I0319 09:28:27.364130 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_fe4e3a0b-973b-4534-b91c-1e870e4e5c32/installer/0.log"
Mar 19 09:28:27.364450 master-0 kubenswrapper[7385]: I0319 09:28:27.364198 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"fe4e3a0b-973b-4534-b91c-1e870e4e5c32","Type":"ContainerDied","Data":"a3e68a93a5e0eb978126226b3b3f9b90c706e1a1f588f63ea47aa67b19c47bdf"}
Mar 19 09:28:27.364450 master-0 kubenswrapper[7385]: I0319 09:28:27.364229 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e68a93a5e0eb978126226b3b3f9b90c706e1a1f588f63ea47aa67b19c47bdf"
Mar 19 09:28:27.364450 master-0 kubenswrapper[7385]: I0319 09:28:27.364277 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:28:27.628255 master-0 kubenswrapper[7385]: E0319 09:28:27.628067 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded"
Mar 19 09:28:27.731345 master-0 kubenswrapper[7385]: I0319 09:28:27.731284 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:27.731345 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:27.731345 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:27.731345 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:27.731940 master-0 kubenswrapper[7385]: I0319 09:28:27.731893 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:28.732256 master-0 kubenswrapper[7385]: I0319 09:28:28.732184 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:28.732256 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:28.732256 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:28.732256 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:28.733329 master-0 kubenswrapper[7385]: I0319 09:28:28.732270 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:29.464962 master-0 kubenswrapper[7385]: I0319 09:28:29.464879 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 19 09:28:29.731401 master-0 kubenswrapper[7385]: I0319 09:28:29.731266 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:29.731401 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:29.731401 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:29.731401 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:29.731401 master-0 kubenswrapper[7385]: I0319 09:28:29.731352 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:30.730993 master-0 kubenswrapper[7385]: I0319 09:28:30.730913 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:30.730993 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:30.730993 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:30.730993 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:30.732253 master-0 kubenswrapper[7385]: I0319 09:28:30.731010 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:31.732439 master-0 kubenswrapper[7385]: I0319 09:28:31.732356 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:31.732439 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:31.732439 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:31.732439 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:31.733420 master-0 kubenswrapper[7385]: I0319 09:28:31.732456 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:32.731356 master-0 kubenswrapper[7385]: I0319 09:28:32.731257 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:32.731356 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:32.731356 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:32.731356 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:32.731356 master-0 kubenswrapper[7385]: I0319 09:28:32.731329 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:33.731079 master-0 kubenswrapper[7385]: I0319 09:28:33.731016 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:33.731079 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:33.731079 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:33.731079 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:33.731643 master-0 kubenswrapper[7385]: I0319 09:28:33.731101 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:34.731536 master-0 kubenswrapper[7385]: I0319 09:28:34.731488 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:34.731536 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:34.731536 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:34.731536 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:34.732348 master-0 kubenswrapper[7385]: I0319 09:28:34.732317 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:35.731419 master-0 kubenswrapper[7385]: I0319 09:28:35.731329 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:35.731419 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:35.731419 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:35.731419 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:35.732064 master-0 kubenswrapper[7385]: I0319 09:28:35.731444 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:36.731785 master-0 kubenswrapper[7385]: I0319 09:28:36.731716 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:36.731785 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:36.731785 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:36.731785 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:36.731785 master-0 kubenswrapper[7385]: I0319 09:28:36.731784 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:37.628692 master-0 kubenswrapper[7385]: E0319 09:28:37.628573 7385 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:28:37.628692 master-0 kubenswrapper[7385]: I0319 09:28:37.628643 7385 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 19 09:28:37.732284 master-0 kubenswrapper[7385]: I0319 09:28:37.732113 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:37.732284 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:37.732284 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:37.732284 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:37.733369 master-0 kubenswrapper[7385]: I0319 09:28:37.732301 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:38.731030 master-0 kubenswrapper[7385]: I0319 09:28:38.730960 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:38.731030 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:38.731030 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:38.731030 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:38.731449 master-0 kubenswrapper[7385]: I0319 09:28:38.731052 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:39.731730 master-0 kubenswrapper[7385]: I0319 09:28:39.731615 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:39.731730 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:39.731730 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:39.731730 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:39.731730 master-0 kubenswrapper[7385]: I0319 09:28:39.731726 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:40.731170 master-0 kubenswrapper[7385]: I0319 09:28:40.731096 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:40.731170 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:40.731170 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:40.731170 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:40.731170 master-0 kubenswrapper[7385]: I0319 09:28:40.731179 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:41.730739 master-0 kubenswrapper[7385]: I0319 09:28:41.730642 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:41.730739 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:41.730739 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:41.730739 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:41.730739 master-0 kubenswrapper[7385]: I0319 09:28:41.730731 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:42.462890 master-0 kubenswrapper[7385]: I0319 09:28:42.462842 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-t7zwh_47da8964-3606-4181-87fb-8f04a3065295/approver/1.log"
Mar 19 09:28:42.463391 master-0 kubenswrapper[7385]: I0319 09:28:42.463353 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-t7zwh_47da8964-3606-4181-87fb-8f04a3065295/approver/0.log"
Mar 19 09:28:42.463716 master-0 kubenswrapper[7385]: I0319 09:28:42.463679 7385 generic.go:334] "Generic (PLEG): container finished" podID="47da8964-3606-4181-87fb-8f04a3065295" containerID="380db29610ce50b23d444ae24a9a82ff721513171d94f5e05240298cc4418dff" exitCode=1
Mar 19 09:28:42.463776 master-0 kubenswrapper[7385]: I0319 09:28:42.463715 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-t7zwh" event={"ID":"47da8964-3606-4181-87fb-8f04a3065295","Type":"ContainerDied","Data":"380db29610ce50b23d444ae24a9a82ff721513171d94f5e05240298cc4418dff"}
Mar 19 09:28:42.463776 master-0 kubenswrapper[7385]: I0319 09:28:42.463752 7385 scope.go:117] "RemoveContainer" containerID="9b3fc8a626e0487acce62c5d3181f8201f7287976a42754235b1309dbd2babb2"
Mar 19 09:28:42.464678 master-0 kubenswrapper[7385]: I0319 09:28:42.464644 7385 scope.go:117] "RemoveContainer" containerID="380db29610ce50b23d444ae24a9a82ff721513171d94f5e05240298cc4418dff"
Mar 19 09:28:42.464904 master-0 kubenswrapper[7385]: E0319 09:28:42.464843 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-t7zwh_openshift-network-node-identity(47da8964-3606-4181-87fb-8f04a3065295)\"" pod="openshift-network-node-identity/network-node-identity-t7zwh" podUID="47da8964-3606-4181-87fb-8f04a3065295"
Mar 19 09:28:42.732098 master-0 kubenswrapper[7385]: I0319 09:28:42.731932 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:42.732098 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:42.732098 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:42.732098 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:42.732098 master-0 kubenswrapper[7385]: I0319 09:28:42.731995 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:43.481811 master-0 kubenswrapper[7385]: I0319 09:28:43.481753 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-t7zwh_47da8964-3606-4181-87fb-8f04a3065295/approver/1.log"
Mar 19 09:28:43.731718 master-0 kubenswrapper[7385]: I0319 09:28:43.731663 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:43.731718 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:43.731718 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:43.731718 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:43.732022 master-0 kubenswrapper[7385]: I0319 09:28:43.731726 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:44.730874 master-0 kubenswrapper[7385]: I0319 09:28:44.730790 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:44.730874 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:44.730874 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:44.730874 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:44.731834 master-0 kubenswrapper[7385]: I0319 09:28:44.730895 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:45.730965 master-0 kubenswrapper[7385]: I0319 09:28:45.730859 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:45.730965 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:45.730965 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:45.730965 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:45.730965 master-0 kubenswrapper[7385]: I0319 09:28:45.730938 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:46.049904 master-0 kubenswrapper[7385]: I0319 09:28:46.049782 7385 status_manager.go:851] "Failed to get status for pod" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" pod="openshift-controller-manager/controller-manager-5bb7458647-2hx6x" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods controller-manager-5bb7458647-2hx6x)"
Mar 19 09:28:46.732058 master-0 kubenswrapper[7385]: I0319 09:28:46.731981 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:46.732058 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:46.732058 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:46.732058 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:46.732677 master-0 kubenswrapper[7385]: I0319 09:28:46.732064 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:47.629730 master-0 kubenswrapper[7385]: E0319 09:28:47.629643 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 19 09:28:47.731192 master-0 kubenswrapper[7385]: I0319 09:28:47.731115 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:47.731192 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:47.731192 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:47.731192 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:47.731192 master-0 kubenswrapper[7385]: I0319 09:28:47.731170 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:48.731704 master-0 kubenswrapper[7385]: I0319 09:28:48.731632 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:48.731704 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:48.731704 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:48.731704 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:48.732649 master-0 kubenswrapper[7385]: I0319 09:28:48.731743 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:49.730657 master-0 kubenswrapper[7385]: I0319 09:28:49.730592 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:49.730657 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:49.730657 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:49.730657 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:49.730915 master-0 kubenswrapper[7385]: I0319 09:28:49.730677 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:50.731334 master-0 kubenswrapper[7385]: I0319 09:28:50.731245 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:50.731334 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:50.731334 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:50.731334 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:50.732146 master-0 kubenswrapper[7385]: I0319 09:28:50.731338 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:51.731718 master-0 kubenswrapper[7385]: I0319 09:28:51.731609 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:51.731718 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:51.731718 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:51.731718 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:51.731718 master-0 kubenswrapper[7385]: I0319 09:28:51.731706 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:52.552169 master-0 kubenswrapper[7385]: E0319 09:28:52.552137 7385 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 19 09:28:52.552840 master-0 kubenswrapper[7385]: I0319 09:28:52.552798 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 19 09:28:52.730326 master-0 kubenswrapper[7385]: I0319 09:28:52.730259 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:52.730326 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:52.730326 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:52.730326 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:52.730568 master-0 kubenswrapper[7385]: I0319 09:28:52.730359 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:53.554704 master-0 kubenswrapper[7385]: I0319 09:28:53.554607 7385 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="0861ab2fae00b28361f12c7b94fd6d71acf9a50d9f9e835730f83b9c6daaad52" exitCode=0
Mar 19 09:28:53.556181 master-0 kubenswrapper[7385]: I0319 09:28:53.554733 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"0861ab2fae00b28361f12c7b94fd6d71acf9a50d9f9e835730f83b9c6daaad52"}
Mar 19 09:28:53.556181 master-0 kubenswrapper[7385]: I0319 09:28:53.554829 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"7432a082c2253d23b865426cbd0b7c6fc641fd734bb3b6088975045dd1832638"}
Mar 19 09:28:53.556181 master-0 kubenswrapper[7385]: I0319 09:28:53.555387 7385 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825"
Mar 19 09:28:53.556181 master-0 kubenswrapper[7385]: I0319 09:28:53.555425 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825"
Mar 19 09:28:53.731894 master-0 kubenswrapper[7385]: I0319 09:28:53.731779 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:53.731894 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:53.731894 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:53.731894 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:53.732378 master-0 kubenswrapper[7385]: I0319 09:28:53.731898 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:53.930739 master-0 kubenswrapper[7385]: E0319 09:28:53.930539 7385 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-66b84d69b-vfnhd.189e33def346cf5d openshift-ingress-operator 11259 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-66b84d69b-vfnhd,UID:8bdeb4f3-99f7-44ef-beac-53c3cc073c5a,APIVersion:v1,ResourceVersion:3756,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:25:35 +0000 UTC,LastTimestamp:2026-03-19 09:27:49.083445712 +0000 UTC m=+564.757875403,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 09:28:54.731315 master-0 kubenswrapper[7385]: I0319 09:28:54.731229 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:54.731315 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:54.731315 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:54.731315 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:54.732377 master-0 kubenswrapper[7385]: I0319 09:28:54.731332 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:55.530104 master-0 kubenswrapper[7385]: I0319 09:28:55.530028 7385 scope.go:117] "RemoveContainer" containerID="380db29610ce50b23d444ae24a9a82ff721513171d94f5e05240298cc4418dff"
Mar 19 09:28:55.731739 master-0 kubenswrapper[7385]: I0319 09:28:55.731602 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:55.731739 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:55.731739 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:55.731739 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:55.732246 master-0 kubenswrapper[7385]: I0319 09:28:55.731764 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:56.589505 master-0 kubenswrapper[7385]: I0319 09:28:56.589447 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-t7zwh_47da8964-3606-4181-87fb-8f04a3065295/approver/1.log"
Mar 19 09:28:56.590054 master-0 kubenswrapper[7385]: I0319 09:28:56.589989 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-t7zwh" event={"ID":"47da8964-3606-4181-87fb-8f04a3065295","Type":"ContainerStarted","Data":"4d49c28cf13c20b3012192781394a8264c7db48c8cffd90b8888c4312cf635d2"}
Mar 19 09:28:56.732238 master-0 kubenswrapper[7385]: I0319 09:28:56.732120 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:56.732238 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:56.732238 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:56.732238 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:56.732238 master-0 kubenswrapper[7385]: I0319 09:28:56.732192 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:57.731273 master-0 kubenswrapper[7385]: I0319 09:28:57.731158 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:57.731273 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:57.731273 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:57.731273 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:57.731896 master-0 kubenswrapper[7385]: I0319 09:28:57.731274 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:57.830141 master-0 kubenswrapper[7385]: E0319 09:28:57.830019 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 19 09:28:58.731226 master-0 kubenswrapper[7385]: I0319 09:28:58.731111 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:58.731226 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:58.731226 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:58.731226 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:58.731593 master-0 kubenswrapper[7385]: I0319 09:28:58.731268 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:28:59.731822 master-0 kubenswrapper[7385]: I0319 09:28:59.731661 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:28:59.731822 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:28:59.731822 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:28:59.731822 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:28:59.731822 master-0 kubenswrapper[7385]: I0319 09:28:59.731793 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:00.731445 master-0 kubenswrapper[7385]: I0319 09:29:00.731379 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:00.731445 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:00.731445 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:00.731445 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:00.731727 master-0 kubenswrapper[7385]: I0319 09:29:00.731464 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:01.730981 master-0 kubenswrapper[7385]: I0319 09:29:01.730907 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:01.730981 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:01.730981 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:01.730981 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:01.731624 master-0 kubenswrapper[7385]: I0319 09:29:01.731016 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:02.730488 master-0 kubenswrapper[7385]: I0319 09:29:02.730404 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:02.730488 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:02.730488 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:02.730488 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:02.730488 master-0 kubenswrapper[7385]: I0319 09:29:02.730482 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:03.731968 master-0 kubenswrapper[7385]: I0319 09:29:03.731870 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:03.731968 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:03.731968 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:03.731968 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:03.732537 master-0 kubenswrapper[7385]: I0319 09:29:03.732011 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:04.732528 master-0 kubenswrapper[7385]: I0319 09:29:04.732442 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:04.732528 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:04.732528 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:04.732528 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:04.733323 master-0 kubenswrapper[7385]: I0319 09:29:04.732535 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:05.731909 master-0 kubenswrapper[7385]: I0319 09:29:05.731806 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:05.731909 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:05.731909 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:05.731909 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:05.732615 master-0 kubenswrapper[7385]: I0319 09:29:05.731918 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:06.731239 master-0 kubenswrapper[7385]: I0319 09:29:06.731187 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:06.731239 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:06.731239 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:06.731239 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:06.731604 master-0 kubenswrapper[7385]: I0319 09:29:06.731257 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:07.731314 
master-0 kubenswrapper[7385]: I0319 09:29:07.731228 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:07.731314 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:07.731314 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:07.731314 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:07.732026 master-0 kubenswrapper[7385]: I0319 09:29:07.731317 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:08.231124 master-0 kubenswrapper[7385]: E0319 09:29:08.230939 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 19 09:29:08.731653 master-0 kubenswrapper[7385]: I0319 09:29:08.731592 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:08.731653 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:08.731653 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:08.731653 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:08.732660 master-0 kubenswrapper[7385]: I0319 09:29:08.731666 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" 
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:09.731119 master-0 kubenswrapper[7385]: I0319 09:29:09.731030 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:09.731119 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:09.731119 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:09.731119 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:09.731615 master-0 kubenswrapper[7385]: I0319 09:29:09.731120 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:10.731673 master-0 kubenswrapper[7385]: I0319 09:29:10.731567 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:10.731673 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:10.731673 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:10.731673 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:10.732232 master-0 kubenswrapper[7385]: I0319 09:29:10.731681 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:11.730640 master-0 kubenswrapper[7385]: I0319 09:29:11.730580 7385 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:11.730640 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:11.730640 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:11.730640 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:11.730640 master-0 kubenswrapper[7385]: I0319 09:29:11.730643 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:12.731766 master-0 kubenswrapper[7385]: I0319 09:29:12.731679 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:12.731766 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:12.731766 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:12.731766 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:12.731766 master-0 kubenswrapper[7385]: I0319 09:29:12.731756 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:13.732465 master-0 kubenswrapper[7385]: I0319 09:29:13.732343 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:13.732465 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:13.732465 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:13.732465 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:13.732465 master-0 kubenswrapper[7385]: I0319 09:29:13.732459 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:14.715034 master-0 kubenswrapper[7385]: I0319 09:29:14.714977 7385 generic.go:334] "Generic (PLEG): container finished" podID="70e8c62b-97c3-4c0c-85d3-f660118831fd" containerID="3091cd39c91635e4ee1ea702b34d340a7966feb6a8a53ede843ba60081ff82bc" exitCode=0 Mar 19 09:29:14.715034 master-0 kubenswrapper[7385]: I0319 09:29:14.715024 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerDied","Data":"3091cd39c91635e4ee1ea702b34d340a7966feb6a8a53ede843ba60081ff82bc"} Mar 19 09:29:14.715933 master-0 kubenswrapper[7385]: I0319 09:29:14.715074 7385 scope.go:117] "RemoveContainer" containerID="07f85a8394cfe2927824d6dd40beca1cf31136db472d1b09c7b6f5f1e6dae94f" Mar 19 09:29:14.715933 master-0 kubenswrapper[7385]: I0319 09:29:14.715663 7385 scope.go:117] "RemoveContainer" containerID="3091cd39c91635e4ee1ea702b34d340a7966feb6a8a53ede843ba60081ff82bc" Mar 19 09:29:14.715933 master-0 kubenswrapper[7385]: E0319 09:29:14.715853 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-h4zrl_openshift-insights(70e8c62b-97c3-4c0c-85d3-f660118831fd)\"" 
pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" podUID="70e8c62b-97c3-4c0c-85d3-f660118831fd" Mar 19 09:29:14.730958 master-0 kubenswrapper[7385]: I0319 09:29:14.730887 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:14.730958 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:14.730958 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:14.730958 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:14.731229 master-0 kubenswrapper[7385]: I0319 09:29:14.730980 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:15.731037 master-0 kubenswrapper[7385]: I0319 09:29:15.730961 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:15.731037 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:15.731037 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:15.731037 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:15.731711 master-0 kubenswrapper[7385]: I0319 09:29:15.731052 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:16.731587 master-0 kubenswrapper[7385]: I0319 09:29:16.731492 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:16.731587 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:16.731587 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:16.731587 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:16.732265 master-0 kubenswrapper[7385]: I0319 09:29:16.731610 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:17.731014 master-0 kubenswrapper[7385]: I0319 09:29:17.730920 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:17.731014 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:17.731014 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:17.731014 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:17.731686 master-0 kubenswrapper[7385]: I0319 09:29:17.731017 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:18.731209 master-0 kubenswrapper[7385]: I0319 09:29:18.731152 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 
09:29:18.731209 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:18.731209 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:18.731209 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:18.731209 master-0 kubenswrapper[7385]: I0319 09:29:18.731208 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:19.031885 master-0 kubenswrapper[7385]: E0319 09:29:19.031759 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 19 09:29:19.730734 master-0 kubenswrapper[7385]: I0319 09:29:19.730671 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:19.730734 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:19.730734 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:19.730734 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:19.731000 master-0 kubenswrapper[7385]: I0319 09:29:19.730775 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:20.730448 master-0 kubenswrapper[7385]: I0319 09:29:20.730392 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:20.730448 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:20.730448 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:20.730448 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:20.730989 master-0 kubenswrapper[7385]: I0319 09:29:20.730448 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:21.731363 master-0 kubenswrapper[7385]: I0319 09:29:21.731238 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:21.731363 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:21.731363 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:21.731363 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:21.732114 master-0 kubenswrapper[7385]: I0319 09:29:21.731406 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:22.730665 master-0 kubenswrapper[7385]: I0319 09:29:22.730597 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:22.730665 master-0 kubenswrapper[7385]: 
[-]has-synced failed: reason withheld Mar 19 09:29:22.730665 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:22.730665 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:22.730931 master-0 kubenswrapper[7385]: I0319 09:29:22.730668 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:23.732695 master-0 kubenswrapper[7385]: I0319 09:29:23.732523 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:23.732695 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:23.732695 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:23.732695 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:23.733708 master-0 kubenswrapper[7385]: I0319 09:29:23.732748 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:24.731105 master-0 kubenswrapper[7385]: I0319 09:29:24.731025 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:24.731105 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:24.731105 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:24.731105 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:24.731105 master-0 
kubenswrapper[7385]: I0319 09:29:24.731104 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:25.530341 master-0 kubenswrapper[7385]: I0319 09:29:25.530277 7385 scope.go:117] "RemoveContainer" containerID="3091cd39c91635e4ee1ea702b34d340a7966feb6a8a53ede843ba60081ff82bc" Mar 19 09:29:25.530857 master-0 kubenswrapper[7385]: E0319 09:29:25.530486 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-h4zrl_openshift-insights(70e8c62b-97c3-4c0c-85d3-f660118831fd)\"" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" podUID="70e8c62b-97c3-4c0c-85d3-f660118831fd" Mar 19 09:29:25.730761 master-0 kubenswrapper[7385]: I0319 09:29:25.730669 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:25.730761 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:25.730761 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:25.730761 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:25.731127 master-0 kubenswrapper[7385]: I0319 09:29:25.730775 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:26.731638 master-0 kubenswrapper[7385]: I0319 09:29:26.731532 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:26.731638 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:26.731638 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:26.731638 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:26.732515 master-0 kubenswrapper[7385]: I0319 09:29:26.731643 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:27.559051 master-0 kubenswrapper[7385]: E0319 09:29:27.558977 7385 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 19 09:29:27.731620 master-0 kubenswrapper[7385]: I0319 09:29:27.731522 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:29:27.731620 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:29:27.731620 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:29:27.731620 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:29:27.735317 master-0 kubenswrapper[7385]: I0319 09:29:27.731629 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:29:27.936111 master-0 kubenswrapper[7385]: E0319 
09:29:27.935927 7385 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e33f820930de0 openshift-kube-controller-manager 12038 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:67658b93f6f5927402b87ec35623e46e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:27:24 +0000 UTC,LastTimestamp:2026-03-19 09:27:59.160153069 +0000 UTC m=+574.834582770,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 19 09:29:28.730810 master-0 kubenswrapper[7385]: I0319 09:29:28.730743 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:28.730810 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:28.730810 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:28.730810 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:28.731092 master-0 kubenswrapper[7385]: I0319 09:29:28.730818 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:28.813524 master-0 kubenswrapper[7385]: I0319 09:29:28.813385 7385 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="88aa2b34d394b9b72033dd41d87e96aa90f1022306b9040706a5972685dd778d" exitCode=0
Mar 19 09:29:28.814050 master-0 kubenswrapper[7385]: I0319 09:29:28.813424 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"88aa2b34d394b9b72033dd41d87e96aa90f1022306b9040706a5972685dd778d"}
Mar 19 09:29:28.814717 master-0 kubenswrapper[7385]: I0319 09:29:28.814636 7385 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825"
Mar 19 09:29:28.814760 master-0 kubenswrapper[7385]: I0319 09:29:28.814721 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825"
Mar 19 09:29:29.731368 master-0 kubenswrapper[7385]: I0319 09:29:29.731296 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:29.731368 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:29.731368 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:29.731368 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:29.731368 master-0 kubenswrapper[7385]: I0319 09:29:29.731423 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:30.633395 master-0 kubenswrapper[7385]: E0319 09:29:30.633299 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 19 09:29:30.731217 master-0 kubenswrapper[7385]: I0319 09:29:30.731142 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:30.731217 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:30.731217 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:30.731217 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:30.731217 master-0 kubenswrapper[7385]: I0319 09:29:30.731212 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:31.730691 master-0 kubenswrapper[7385]: I0319 09:29:31.730624 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:31.730691 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:31.730691 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:31.730691 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:31.730691 master-0 kubenswrapper[7385]: I0319 09:29:31.730694 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:32.736123 master-0 kubenswrapper[7385]: I0319 09:29:32.735971 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:32.736123 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:32.736123 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:32.736123 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:32.736123 master-0 kubenswrapper[7385]: I0319 09:29:32.736071 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:33.731384 master-0 kubenswrapper[7385]: I0319 09:29:33.731307 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:33.731384 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:33.731384 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:33.731384 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:33.731776 master-0 kubenswrapper[7385]: I0319 09:29:33.731390 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:34.731210 master-0 kubenswrapper[7385]: I0319 09:29:34.731114 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:34.731210 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:34.731210 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:34.731210 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:34.731210 master-0 kubenswrapper[7385]: I0319 09:29:34.731178 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:34.850385 master-0 kubenswrapper[7385]: I0319 09:29:34.850325 7385 generic.go:334] "Generic (PLEG): container finished" podID="58fbf09a-3a26-45ab-8496-11d05c27e9cf" containerID="f7583682489ded760629cc15df0f0f40f6512cf0cba6d9c07d62c71cf5d0483d" exitCode=0
Mar 19 09:29:34.850385 master-0 kubenswrapper[7385]: I0319 09:29:34.850377 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" event={"ID":"58fbf09a-3a26-45ab-8496-11d05c27e9cf","Type":"ContainerDied","Data":"f7583682489ded760629cc15df0f0f40f6512cf0cba6d9c07d62c71cf5d0483d"}
Mar 19 09:29:34.850922 master-0 kubenswrapper[7385]: I0319 09:29:34.850891 7385 scope.go:117] "RemoveContainer" containerID="f7583682489ded760629cc15df0f0f40f6512cf0cba6d9c07d62c71cf5d0483d"
Mar 19 09:29:35.731090 master-0 kubenswrapper[7385]: I0319 09:29:35.730998 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:35.731090 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:35.731090 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:35.731090 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:35.731929 master-0 kubenswrapper[7385]: I0319 09:29:35.731126 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:35.861054 master-0 kubenswrapper[7385]: I0319 09:29:35.860963 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" event={"ID":"58fbf09a-3a26-45ab-8496-11d05c27e9cf","Type":"ContainerStarted","Data":"204f6d30066eb97754fb7af4b5c31c0e648b7d0f92b4cb2d3b52f7525a16b0a7"}
Mar 19 09:29:35.861498 master-0 kubenswrapper[7385]: I0319 09:29:35.861452 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:29:35.863523 master-0 kubenswrapper[7385]: I0319 09:29:35.863476 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/1.log"
Mar 19 09:29:35.863691 master-0 kubenswrapper[7385]: I0319 09:29:35.863655 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:29:35.864445 master-0 kubenswrapper[7385]: I0319 09:29:35.864384 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/0.log"
Mar 19 09:29:35.865098 master-0 kubenswrapper[7385]: I0319 09:29:35.865039 7385 generic.go:334] "Generic (PLEG): container finished" podID="d58c6b38-ef11-465c-9fee-b83b84ce4669" containerID="dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0" exitCode=1
Mar 19 09:29:35.865189 master-0 kubenswrapper[7385]: I0319 09:29:35.865082 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" event={"ID":"d58c6b38-ef11-465c-9fee-b83b84ce4669","Type":"ContainerDied","Data":"dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0"}
Mar 19 09:29:35.865189 master-0 kubenswrapper[7385]: I0319 09:29:35.865155 7385 scope.go:117] "RemoveContainer" containerID="742f2b9c536e8374c80963c76d1696cff2ac061aef9be3d98e75e3dbbdd21557"
Mar 19 09:29:35.865822 master-0 kubenswrapper[7385]: I0319 09:29:35.865775 7385 scope.go:117] "RemoveContainer" containerID="dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0"
Mar 19 09:29:35.866133 master-0 kubenswrapper[7385]: E0319 09:29:35.866078 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-rgzxb_openshift-catalogd(d58c6b38-ef11-465c-9fee-b83b84ce4669)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" podUID="d58c6b38-ef11-465c-9fee-b83b84ce4669"
Mar 19 09:29:36.530244 master-0 kubenswrapper[7385]: I0319 09:29:36.530126 7385 scope.go:117] "RemoveContainer" containerID="3091cd39c91635e4ee1ea702b34d340a7966feb6a8a53ede843ba60081ff82bc"
Mar 19 09:29:36.731956 master-0 kubenswrapper[7385]: I0319 09:29:36.731853 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:36.731956 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:36.731956 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:36.731956 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:36.731956 master-0 kubenswrapper[7385]: I0319 09:29:36.731944 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:36.874577 master-0 kubenswrapper[7385]: I0319 09:29:36.874470 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/1.log"
Mar 19 09:29:36.877157 master-0 kubenswrapper[7385]: I0319 09:29:36.877098 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerStarted","Data":"94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325"}
Mar 19 09:29:36.879171 master-0 kubenswrapper[7385]: I0319 09:29:36.879129 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/cluster-cloud-controller-manager/0.log"
Mar 19 09:29:36.879263 master-0 kubenswrapper[7385]: I0319 09:29:36.879179 7385 generic.go:334] "Generic (PLEG): container finished" podID="c3610f08-aba1-411d-aa6d-811b88acdb7b" containerID="774e8a3e480c092251698110fbb5b53d79965d955c1c4ce2867552029267208f" exitCode=1
Mar 19 09:29:36.879263 master-0 kubenswrapper[7385]: I0319 09:29:36.879226 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerDied","Data":"774e8a3e480c092251698110fbb5b53d79965d955c1c4ce2867552029267208f"}
Mar 19 09:29:36.879968 master-0 kubenswrapper[7385]: I0319 09:29:36.879919 7385 scope.go:117] "RemoveContainer" containerID="774e8a3e480c092251698110fbb5b53d79965d955c1c4ce2867552029267208f"
Mar 19 09:29:37.382117 master-0 kubenswrapper[7385]: I0319 09:29:37.382041 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:29:37.382418 master-0 kubenswrapper[7385]: I0319 09:29:37.382135 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:29:37.382510 master-0 kubenswrapper[7385]: I0319 09:29:37.382472 7385 scope.go:117] "RemoveContainer" containerID="dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0"
Mar 19 09:29:37.382765 master-0 kubenswrapper[7385]: E0319 09:29:37.382667 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-rgzxb_openshift-catalogd(d58c6b38-ef11-465c-9fee-b83b84ce4669)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" podUID="d58c6b38-ef11-465c-9fee-b83b84ce4669"
Mar 19 09:29:37.731760 master-0 kubenswrapper[7385]: I0319 09:29:37.731685 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:37.731760 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:37.731760 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:37.731760 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:37.732655 master-0 kubenswrapper[7385]: I0319 09:29:37.731775 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:37.898583 master-0 kubenswrapper[7385]: I0319 09:29:37.898476 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/cluster-cloud-controller-manager/0.log"
Mar 19 09:29:37.898875 master-0 kubenswrapper[7385]: I0319 09:29:37.898586 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerStarted","Data":"fd1bb8b91fa08567396d8c6918b12b546a3b6b07ec7fbd98ce520fe35dbdb340"}
Mar 19 09:29:37.899324 master-0 kubenswrapper[7385]: I0319 09:29:37.899270 7385 scope.go:117] "RemoveContainer" containerID="dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0"
Mar 19 09:29:37.899596 master-0 kubenswrapper[7385]: E0319 09:29:37.899533 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-rgzxb_openshift-catalogd(d58c6b38-ef11-465c-9fee-b83b84ce4669)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" podUID="d58c6b38-ef11-465c-9fee-b83b84ce4669"
Mar 19 09:29:38.731316 master-0 kubenswrapper[7385]: I0319 09:29:38.731222 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:38.731316 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:38.731316 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:38.731316 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:38.731819 master-0 kubenswrapper[7385]: I0319 09:29:38.731323 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:39.731133 master-0 kubenswrapper[7385]: I0319 09:29:39.731041 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:39.731133 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:39.731133 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:39.731133 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:39.732476 master-0 kubenswrapper[7385]: I0319 09:29:39.731148 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:40.730634 master-0 kubenswrapper[7385]: I0319 09:29:40.730591 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:40.730634 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:40.730634 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:40.730634 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:40.731122 master-0 kubenswrapper[7385]: I0319 09:29:40.731093 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:41.731409 master-0 kubenswrapper[7385]: I0319 09:29:41.731340 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:41.731409 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:41.731409 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:41.731409 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:41.732110 master-0 kubenswrapper[7385]: I0319 09:29:41.731438 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:42.731996 master-0 kubenswrapper[7385]: I0319 09:29:42.731921 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:42.731996 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:42.731996 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:42.731996 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:42.732929 master-0 kubenswrapper[7385]: I0319 09:29:42.732895 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:43.730979 master-0 kubenswrapper[7385]: I0319 09:29:43.730888 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:43.730979 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:43.730979 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:43.730979 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:43.730979 master-0 kubenswrapper[7385]: I0319 09:29:43.730976 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:43.833743 master-0 kubenswrapper[7385]: E0319 09:29:43.833652 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 19 09:29:44.732108 master-0 kubenswrapper[7385]: I0319 09:29:44.731989 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:44.732108 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:44.732108 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:44.732108 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:44.732450 master-0 kubenswrapper[7385]: I0319 09:29:44.732119 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:45.731580 master-0 kubenswrapper[7385]: I0319 09:29:45.731466 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:45.731580 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:45.731580 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:45.731580 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:45.732432 master-0 kubenswrapper[7385]: I0319 09:29:45.731613 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:46.051387 master-0 kubenswrapper[7385]: I0319 09:29:46.051243 7385 status_manager.go:851] "Failed to get status for pod" podUID="24b4ed170d527099878cb5fdd508a2fb" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)"
Mar 19 09:29:46.731633 master-0 kubenswrapper[7385]: I0319 09:29:46.731480 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:46.731633 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:46.731633 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:46.731633 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:46.732827 master-0 kubenswrapper[7385]: I0319 09:29:46.731667 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:47.731517 master-0 kubenswrapper[7385]: I0319 09:29:47.731457 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:47.731517 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:47.731517 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:47.731517 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:47.732630 master-0 kubenswrapper[7385]: I0319 09:29:47.732581 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:48.732309 master-0 kubenswrapper[7385]: I0319 09:29:48.732202 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:48.732309 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:48.732309 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:48.732309 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:48.732309 master-0 kubenswrapper[7385]: I0319 09:29:48.732295 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:48.977585 master-0 kubenswrapper[7385]: I0319 09:29:48.977485 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/1.log"
Mar 19 09:29:48.979017 master-0 kubenswrapper[7385]: I0319 09:29:48.978989 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/0.log"
Mar 19 09:29:48.979190 master-0 kubenswrapper[7385]: I0319 09:29:48.979044 7385 generic.go:334] "Generic (PLEG): container finished" podID="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" containerID="167cce93a07388fd74c14d6f7c9fcb3960b363bc259d8edc2e5ed4f902650640" exitCode=1
Mar 19 09:29:48.979190 master-0 kubenswrapper[7385]: I0319 09:29:48.979081 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" event={"ID":"d5d9fbaf-ba14-4d2b-8376-1634eabbc782","Type":"ContainerDied","Data":"167cce93a07388fd74c14d6f7c9fcb3960b363bc259d8edc2e5ed4f902650640"}
Mar 19 09:29:48.979190 master-0 kubenswrapper[7385]: I0319 09:29:48.979125 7385 scope.go:117] "RemoveContainer" containerID="02033eb14ea31d2437ce887b5f2e88f1b7e843f260536c63c7e107349723d088"
Mar 19 09:29:48.980343 master-0 kubenswrapper[7385]: I0319 09:29:48.980147 7385 scope.go:117] "RemoveContainer" containerID="167cce93a07388fd74c14d6f7c9fcb3960b363bc259d8edc2e5ed4f902650640"
Mar 19 09:29:48.981106 master-0 kubenswrapper[7385]: E0319 09:29:48.981021 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-7v7bv_openshift-operator-controller(d5d9fbaf-ba14-4d2b-8376-1634eabbc782)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" podUID="d5d9fbaf-ba14-4d2b-8376-1634eabbc782"
Mar 19 09:29:49.730571 master-0 kubenswrapper[7385]: I0319 09:29:49.730499 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:49.730571 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:49.730571 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:49.730571 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:49.730891 master-0 kubenswrapper[7385]: I0319 09:29:49.730600 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:49.997708 master-0 kubenswrapper[7385]: I0319 09:29:49.997536 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/1.log"
Mar 19 09:29:50.731603 master-0 kubenswrapper[7385]: I0319 09:29:50.731457 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:50.731603 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:50.731603 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:50.731603 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:50.731603 master-0 kubenswrapper[7385]: I0319 09:29:50.731582 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:51.529835 master-0 kubenswrapper[7385]: I0319 09:29:51.529775 7385 scope.go:117] "RemoveContainer" containerID="dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0"
Mar 19 09:29:51.731029 master-0 kubenswrapper[7385]: I0319 09:29:51.730972 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:29:51.731029 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:29:51.731029 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:29:51.731029 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:29:51.731300 master-0 kubenswrapper[7385]: I0319 09:29:51.731089 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:29:51.731300 master-0 kubenswrapper[7385]: I0319 09:29:51.731165 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:29:51.732064 master-0 kubenswrapper[7385]: I0319 09:29:51.732018 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"2fae7b44934deb2f61dfa30059ff2a9d4e27ce928263e021c35df2bf0416f39e"} pod="openshift-ingress/router-default-7dcf5569b5-k99cg" containerMessage="Container router failed startup probe, will be restarted"
Mar 19 09:29:51.732131 master-0 kubenswrapper[7385]: I0319 09:29:51.732078 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" containerID="cri-o://2fae7b44934deb2f61dfa30059ff2a9d4e27ce928263e021c35df2bf0416f39e" gracePeriod=3600
Mar 19 09:29:52.016454 master-0 kubenswrapper[7385]: I0319 09:29:52.016362 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/1.log"
Mar 19 09:29:52.017090 master-0 kubenswrapper[7385]: I0319 09:29:52.017041 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/0.log"
Mar 19 09:29:52.017162 master-0 kubenswrapper[7385]: I0319 09:29:52.017112 7385 generic.go:334] "Generic (PLEG): container finished" podID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b" containerID="955abd98da497abd3bbc8af184913584cf2b14be52bdce5885deda84e0aeecd4" exitCode=1
Mar 19 09:29:52.017211 master-0 kubenswrapper[7385]: I0319 09:29:52.017189 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerDied","Data":"955abd98da497abd3bbc8af184913584cf2b14be52bdce5885deda84e0aeecd4"}
Mar 19 09:29:52.017272 master-0 kubenswrapper[7385]: I0319 09:29:52.017233 7385 scope.go:117] "RemoveContainer" containerID="8140af4cb4bb09d2ed5ad0f6ec653bbb3dc06a4515b9db389545823579fd212a"
Mar 19 09:29:52.018130 master-0 kubenswrapper[7385]: I0319 09:29:52.018073 7385 scope.go:117] "RemoveContainer" containerID="955abd98da497abd3bbc8af184913584cf2b14be52bdce5885deda84e0aeecd4"
Mar 19 09:29:52.018677 master-0 kubenswrapper[7385]: E0319 09:29:52.018527 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-blgk8_openshift-cluster-storage-operator(de72ea6c-f3ce-41a5-9a43-9db4f27ed84b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" podUID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b"
Mar 19 09:29:52.020192 master-0 kubenswrapper[7385]: I0319 09:29:52.020149 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/1.log"
Mar 19 09:29:52.020765 master-0 kubenswrapper[7385]: I0319 09:29:52.020672 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" event={"ID":"d58c6b38-ef11-465c-9fee-b83b84ce4669","Type":"ContainerStarted","Data":"e578bbf86fe3b9980b6396d3f1a052c4c08f43f596b6e24db913110288b64555"}
Mar 19 09:29:52.021255 master-0 kubenswrapper[7385]: I0319 09:29:52.021197 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:29:53.033471 master-0 kubenswrapper[7385]: I0319 09:29:53.033417 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/1.log"
Mar 19 09:29:53.831535 master-0 kubenswrapper[7385]: I0319 09:29:53.831463 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:29:53.831535 master-0 kubenswrapper[7385]: I0319 09:29:53.831538 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:29:53.832202 master-0 kubenswrapper[7385]: I0319 09:29:53.832168 7385 scope.go:117] "RemoveContainer" containerID="167cce93a07388fd74c14d6f7c9fcb3960b363bc259d8edc2e5ed4f902650640"
Mar 19 09:29:53.832577 master-0 kubenswrapper[7385]: E0319 09:29:53.832511 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-7v7bv_openshift-operator-controller(d5d9fbaf-ba14-4d2b-8376-1634eabbc782)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" podUID="d5d9fbaf-ba14-4d2b-8376-1634eabbc782"
Mar 19 09:29:54.044601 master-0 kubenswrapper[7385]: I0319 09:29:54.044507 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/config-sync-controllers/0.log"
Mar 19 09:29:54.045597 master-0 kubenswrapper[7385]: I0319 09:29:54.045386 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/cluster-cloud-controller-manager/0.log"
Mar 19 09:29:54.045597 master-0 kubenswrapper[7385]: I0319 09:29:54.045508 7385 generic.go:334] "Generic (PLEG): container finished" podID="c3610f08-aba1-411d-aa6d-811b88acdb7b" containerID="ccbf8c179749d131ecca685672edda794d3d9e56e155b18ba174f1ad15f4ce67" exitCode=1
Mar 19 09:29:54.045819 master-0 kubenswrapper[7385]: I0319 09:29:54.045600 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerDied","Data":"ccbf8c179749d131ecca685672edda794d3d9e56e155b18ba174f1ad15f4ce67"} Mar 19 09:29:54.046757 master-0 kubenswrapper[7385]: I0319 09:29:54.046699 7385 scope.go:117] "RemoveContainer" containerID="ccbf8c179749d131ecca685672edda794d3d9e56e155b18ba174f1ad15f4ce67" Mar 19 09:29:55.055130 master-0 kubenswrapper[7385]: I0319 09:29:55.055075 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/config-sync-controllers/0.log" Mar 19 09:29:55.055987 master-0 kubenswrapper[7385]: I0319 09:29:55.055932 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/cluster-cloud-controller-manager/0.log" Mar 19 09:29:55.056077 master-0 kubenswrapper[7385]: I0319 09:29:55.056030 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" event={"ID":"c3610f08-aba1-411d-aa6d-811b88acdb7b","Type":"ContainerStarted","Data":"e8c0f88dac84d2fef30527e816ce719aa155ebe96a60a3548691fbc5dd2c7462"} Mar 19 09:29:57.383189 master-0 kubenswrapper[7385]: I0319 09:29:57.383131 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:30:00.235796 master-0 kubenswrapper[7385]: E0319 09:30:00.235723 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Mar 19 09:30:01.939568 master-0 kubenswrapper[7385]: E0319 09:30:01.939390 7385 event.go:359] "Server rejected 
event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e33f83026ed30 openshift-kube-controller-manager 12040 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:67658b93f6f5927402b87ec35623e46e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:27:24 +0000 UTC,LastTimestamp:2026-03-19 09:27:59.357920797 +0000 UTC m=+575.032350498,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:30:02.817815 master-0 kubenswrapper[7385]: E0319 09:30:02.817735 7385 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 19 09:30:03.104801 master-0 kubenswrapper[7385]: I0319 09:30:03.104377 7385 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="f17b7ff6194fded74c25ab24964fb6d46dcd1d8e29da6ff5d4563dab4dd944c9" exitCode=0 Mar 19 09:30:03.104801 master-0 kubenswrapper[7385]: I0319 09:30:03.104456 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"f17b7ff6194fded74c25ab24964fb6d46dcd1d8e29da6ff5d4563dab4dd944c9"} Mar 19 09:30:03.106281 master-0 kubenswrapper[7385]: I0319 09:30:03.105818 7385 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825" Mar 19 09:30:03.106281 master-0 
kubenswrapper[7385]: I0319 09:30:03.105852 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825" Mar 19 09:30:05.530656 master-0 kubenswrapper[7385]: I0319 09:30:05.530570 7385 scope.go:117] "RemoveContainer" containerID="167cce93a07388fd74c14d6f7c9fcb3960b363bc259d8edc2e5ed4f902650640" Mar 19 09:30:06.128598 master-0 kubenswrapper[7385]: I0319 09:30:06.128531 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/1.log" Mar 19 09:30:06.129092 master-0 kubenswrapper[7385]: I0319 09:30:06.129047 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" event={"ID":"d5d9fbaf-ba14-4d2b-8376-1634eabbc782","Type":"ContainerStarted","Data":"56f77b71d7e1bb57c0cf484687e7c7fcc678ee52a7a67be92f2f120e1f5466f4"} Mar 19 09:30:06.129405 master-0 kubenswrapper[7385]: I0319 09:30:06.129320 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:30:07.530391 master-0 kubenswrapper[7385]: I0319 09:30:07.530319 7385 scope.go:117] "RemoveContainer" containerID="955abd98da497abd3bbc8af184913584cf2b14be52bdce5885deda84e0aeecd4" Mar 19 09:30:08.145498 master-0 kubenswrapper[7385]: I0319 09:30:08.145440 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/1.log" Mar 19 09:30:08.145966 master-0 kubenswrapper[7385]: I0319 09:30:08.145528 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" 
event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerStarted","Data":"f0a12d54d0a014d4222e62ac44038595a5488e58e6bd422a47b37ea0dcba5fe2"} Mar 19 09:30:13.191513 master-0 kubenswrapper[7385]: I0319 09:30:13.191463 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log" Mar 19 09:30:13.192134 master-0 kubenswrapper[7385]: I0319 09:30:13.191522 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="65e7eeeadf0553dafef845c5c629e20ef18e102a3f0f7e94e025271877410b78" exitCode=0 Mar 19 09:30:13.192134 master-0 kubenswrapper[7385]: I0319 09:30:13.191595 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerDied","Data":"65e7eeeadf0553dafef845c5c629e20ef18e102a3f0f7e94e025271877410b78"} Mar 19 09:30:13.192211 master-0 kubenswrapper[7385]: I0319 09:30:13.192201 7385 scope.go:117] "RemoveContainer" containerID="65e7eeeadf0553dafef845c5c629e20ef18e102a3f0f7e94e025271877410b78" Mar 19 09:30:13.193926 master-0 kubenswrapper[7385]: I0319 09:30:13.193888 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/3.log" Mar 19 09:30:13.194806 master-0 kubenswrapper[7385]: I0319 09:30:13.194700 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/2.log" Mar 19 09:30:13.195341 master-0 kubenswrapper[7385]: I0319 09:30:13.195307 7385 generic.go:334] "Generic (PLEG): container finished" podID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" 
containerID="1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982" exitCode=1 Mar 19 09:30:13.195386 master-0 kubenswrapper[7385]: I0319 09:30:13.195333 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerDied","Data":"1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982"} Mar 19 09:30:13.195386 master-0 kubenswrapper[7385]: I0319 09:30:13.195375 7385 scope.go:117] "RemoveContainer" containerID="0a826efef4d4285208df9ac62804747687dd3c66bd7c0716a36851e3ff4bbfd4" Mar 19 09:30:13.195830 master-0 kubenswrapper[7385]: I0319 09:30:13.195796 7385 scope.go:117] "RemoveContainer" containerID="1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982" Mar 19 09:30:13.196137 master-0 kubenswrapper[7385]: E0319 09:30:13.196102 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:30:13.834581 master-0 kubenswrapper[7385]: I0319 09:30:13.834056 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:30:13.991827 master-0 kubenswrapper[7385]: I0319 09:30:13.991770 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:30:13.992272 master-0 kubenswrapper[7385]: I0319 09:30:13.992238 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
Mar 19 09:30:13.992491 master-0 kubenswrapper[7385]: I0319 09:30:13.992460 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:30:14.203568 master-0 kubenswrapper[7385]: I0319 09:30:14.203514 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/3.log" Mar 19 09:30:14.207014 master-0 kubenswrapper[7385]: I0319 09:30:14.206978 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log" Mar 19 09:30:14.207087 master-0 kubenswrapper[7385]: I0319 09:30:14.207020 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"7293093e6350962a49b52ca956e7616bb3615bed3553d2b475cc57cee735c3ce"} Mar 19 09:30:17.236831 master-0 kubenswrapper[7385]: E0319 09:30:17.236748 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 09:30:20.243275 master-0 kubenswrapper[7385]: I0319 09:30:20.243149 7385 generic.go:334] "Generic (PLEG): container finished" podID="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" containerID="fcb63173a1674e9ce9fc5d4b055442992b282a4bd8e174a8bafa997bfbff21e0" exitCode=0 Mar 19 09:30:20.243275 master-0 kubenswrapper[7385]: I0319 09:30:20.243209 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" 
event={"ID":"1187ddcd-3b78-4b3f-9b12-06ce76cb6040","Type":"ContainerDied","Data":"fcb63173a1674e9ce9fc5d4b055442992b282a4bd8e174a8bafa997bfbff21e0"} Mar 19 09:30:20.243887 master-0 kubenswrapper[7385]: I0319 09:30:20.243834 7385 scope.go:117] "RemoveContainer" containerID="fcb63173a1674e9ce9fc5d4b055442992b282a4bd8e174a8bafa997bfbff21e0" Mar 19 09:30:21.250917 master-0 kubenswrapper[7385]: I0319 09:30:21.250770 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-ttn8h_14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/machine-approver-controller/0.log" Mar 19 09:30:21.251682 master-0 kubenswrapper[7385]: I0319 09:30:21.251108 7385 generic.go:334] "Generic (PLEG): container finished" podID="14ee9a22-5b04-402c-98e9-35e2eb7cb2a2" containerID="11c939b60a227283973184abab4a74f274bf3ad0ae2f5315dbbcb266dc260e1c" exitCode=255 Mar 19 09:30:21.251682 master-0 kubenswrapper[7385]: I0319 09:30:21.251163 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" event={"ID":"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2","Type":"ContainerDied","Data":"11c939b60a227283973184abab4a74f274bf3ad0ae2f5315dbbcb266dc260e1c"} Mar 19 09:30:21.251953 master-0 kubenswrapper[7385]: I0319 09:30:21.251916 7385 scope.go:117] "RemoveContainer" containerID="11c939b60a227283973184abab4a74f274bf3ad0ae2f5315dbbcb266dc260e1c" Mar 19 09:30:21.253807 master-0 kubenswrapper[7385]: I0319 09:30:21.253775 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" event={"ID":"1187ddcd-3b78-4b3f-9b12-06ce76cb6040","Type":"ContainerStarted","Data":"feeba086e0f8ca7708caba3c9c0fc59348af07e444f59d1dac4b46bcccfa7e00"} Mar 19 09:30:22.263291 master-0 kubenswrapper[7385]: I0319 09:30:22.263241 7385 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc_3a07456d-2e8e-4e80-a777-d0903ad21f07/cluster-baremetal-operator/0.log" Mar 19 09:30:22.263291 master-0 kubenswrapper[7385]: I0319 09:30:22.263301 7385 generic.go:334] "Generic (PLEG): container finished" podID="3a07456d-2e8e-4e80-a777-d0903ad21f07" containerID="4aeb041310edd04cfbff93e5aeff660e2a5fd04a8635a1408afa36607a005d38" exitCode=1 Mar 19 09:30:22.264022 master-0 kubenswrapper[7385]: I0319 09:30:22.263359 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" event={"ID":"3a07456d-2e8e-4e80-a777-d0903ad21f07","Type":"ContainerDied","Data":"4aeb041310edd04cfbff93e5aeff660e2a5fd04a8635a1408afa36607a005d38"} Mar 19 09:30:22.264022 master-0 kubenswrapper[7385]: I0319 09:30:22.263909 7385 scope.go:117] "RemoveContainer" containerID="4aeb041310edd04cfbff93e5aeff660e2a5fd04a8635a1408afa36607a005d38" Mar 19 09:30:22.267065 master-0 kubenswrapper[7385]: I0319 09:30:22.267028 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-ttn8h_14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/machine-approver-controller/0.log" Mar 19 09:30:22.269187 master-0 kubenswrapper[7385]: I0319 09:30:22.269141 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" event={"ID":"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2","Type":"ContainerStarted","Data":"1bf5522828eba505e7805c8b0371a4b6f227276e398a2adc6f51f11b76a7ddf8"} Mar 19 09:30:23.275965 master-0 kubenswrapper[7385]: I0319 09:30:23.275903 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc_3a07456d-2e8e-4e80-a777-d0903ad21f07/cluster-baremetal-operator/0.log" Mar 19 09:30:23.276581 master-0 kubenswrapper[7385]: I0319 09:30:23.275979 7385 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" event={"ID":"3a07456d-2e8e-4e80-a777-d0903ad21f07","Type":"ContainerStarted","Data":"42d7d82aba9e7b10269b85039d157d860181e8ade15cd12ada9b398768b2c3d9"} Mar 19 09:30:23.991321 master-0 kubenswrapper[7385]: I0319 09:30:23.991185 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:30:23.991912 master-0 kubenswrapper[7385]: I0319 09:30:23.991855 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:30:25.287470 master-0 kubenswrapper[7385]: I0319 09:30:25.287328 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-chzwl_cef53432-93f5-4581-b3de-c8cc5cac2ecb/control-plane-machine-set-operator/0.log" Mar 19 09:30:25.287470 master-0 kubenswrapper[7385]: I0319 09:30:25.287376 7385 generic.go:334] "Generic (PLEG): container finished" podID="cef53432-93f5-4581-b3de-c8cc5cac2ecb" containerID="bc9135aad8b62aff6fca98f88f979a784539469fc0e4b4ef505d6e449c8e8562" exitCode=1 Mar 19 09:30:25.287470 master-0 kubenswrapper[7385]: I0319 09:30:25.287411 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" event={"ID":"cef53432-93f5-4581-b3de-c8cc5cac2ecb","Type":"ContainerDied","Data":"bc9135aad8b62aff6fca98f88f979a784539469fc0e4b4ef505d6e449c8e8562"} Mar 19 09:30:25.288630 master-0 kubenswrapper[7385]: I0319 09:30:25.287908 7385 scope.go:117] "RemoveContainer" containerID="bc9135aad8b62aff6fca98f88f979a784539469fc0e4b4ef505d6e449c8e8562" Mar 19 09:30:25.529726 master-0 kubenswrapper[7385]: I0319 09:30:25.529669 7385 scope.go:117] "RemoveContainer" containerID="1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982" Mar 19 
09:30:25.529987 master-0 kubenswrapper[7385]: E0319 09:30:25.529884 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:30:26.295271 master-0 kubenswrapper[7385]: I0319 09:30:26.295201 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-chzwl_cef53432-93f5-4581-b3de-c8cc5cac2ecb/control-plane-machine-set-operator/0.log" Mar 19 09:30:26.295271 master-0 kubenswrapper[7385]: I0319 09:30:26.295279 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" event={"ID":"cef53432-93f5-4581-b3de-c8cc5cac2ecb","Type":"ContainerStarted","Data":"b764296a8815095aa3b4677fcb7ea219de573bf736e44d95cdea850e67a2425e"} Mar 19 09:30:26.991951 master-0 kubenswrapper[7385]: I0319 09:30:26.991879 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 09:30:26.991951 master-0 kubenswrapper[7385]: I0319 09:30:26.991947 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" Mar 19 09:30:28.310160 master-0 kubenswrapper[7385]: I0319 09:30:28.310102 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 09:30:28.310771 master-0 kubenswrapper[7385]: I0319 09:30:28.310669 7385 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="7414ff0d187efeb091c598330787485add0219e366d0b09f7b817dd18949f28f" exitCode=1 Mar 19 09:30:28.310771 master-0 kubenswrapper[7385]: I0319 09:30:28.310730 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"7414ff0d187efeb091c598330787485add0219e366d0b09f7b817dd18949f28f"} Mar 19 09:30:28.311362 master-0 kubenswrapper[7385]: I0319 09:30:28.311335 7385 scope.go:117] "RemoveContainer" containerID="7414ff0d187efeb091c598330787485add0219e366d0b09f7b817dd18949f28f" Mar 19 09:30:29.318886 master-0 kubenswrapper[7385]: I0319 09:30:29.318832 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 09:30:29.319419 master-0 kubenswrapper[7385]: I0319 09:30:29.319207 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"53fac99b9b6d7113ded13db31c06fb6988d91b7900890060d24517f7c6a3af61"} Mar 19 09:30:29.319455 master-0 kubenswrapper[7385]: I0319 09:30:29.319439 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:30:34.239297 master-0 kubenswrapper[7385]: E0319 09:30:34.239218 7385 controller.go:145] "Failed to 
ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Mar 19 09:30:35.942519 master-0 kubenswrapper[7385]: E0319 09:30:35.942377 7385 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e33f830c62399 openshift-kube-controller-manager 12042 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:67658b93f6f5927402b87ec35623e46e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:27:24 +0000 UTC,LastTimestamp:2026-03-19 09:27:59.367344504 +0000 UTC m=+575.041774225,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:30:36.991867 master-0 kubenswrapper[7385]: I0319 09:30:36.991781 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 09:30:36.991867 master-0 kubenswrapper[7385]: I0319 09:30:36.991861 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 09:30:37.109449 master-0 kubenswrapper[7385]: E0319 09:30:37.109356 7385 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 19 09:30:37.383161 master-0 kubenswrapper[7385]: I0319 09:30:37.383109 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"5c20104ce7a41bea06c76dc88ee244675c179a1e54d702272138143050d4f7e0"} Mar 19 09:30:37.530252 master-0 kubenswrapper[7385]: I0319 09:30:37.529931 7385 scope.go:117] "RemoveContainer" containerID="1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982" Mar 19 09:30:37.530252 master-0 kubenswrapper[7385]: E0319 09:30:37.530100 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:30:38.417219 master-0 kubenswrapper[7385]: I0319 09:30:38.417160 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"07ff589d07aa06788418e2b7ce676ec4971687ca5a285dd896ddf4c4eded2fba"} Mar 19 09:30:38.417219 master-0 kubenswrapper[7385]: I0319 09:30:38.417211 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"c5c62153adf3a271102f4d9d5640d2d1802d2bb90e84f132621e7b506077bc80"} Mar 19 09:30:38.417219 master-0 kubenswrapper[7385]: I0319 09:30:38.417225 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"9bf2b58b9edb6985d4157d1a669cb411e7d87e4a40043d1cb4839e8d5c366a20"} Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.419249 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/2.log" Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.419678 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/1.log" Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.419708 7385 generic.go:334] "Generic (PLEG): container finished" podID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b" containerID="f0a12d54d0a014d4222e62ac44038595a5488e58e6bd422a47b37ea0dcba5fe2" exitCode=1 Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.419759 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerDied","Data":"f0a12d54d0a014d4222e62ac44038595a5488e58e6bd422a47b37ea0dcba5fe2"} Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.419786 7385 scope.go:117] "RemoveContainer" containerID="955abd98da497abd3bbc8af184913584cf2b14be52bdce5885deda84e0aeecd4" Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.420133 7385 scope.go:117] "RemoveContainer" 
containerID="f0a12d54d0a014d4222e62ac44038595a5488e58e6bd422a47b37ea0dcba5fe2"
Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: E0319 09:30:38.420328 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-blgk8_openshift-cluster-storage-operator(de72ea6c-f3ce-41a5-9a43-9db4f27ed84b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" podUID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b"
Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.422760 7385 generic.go:334] "Generic (PLEG): container finished" podID="57227a66-c758-4a46-a5e1-f603baa3f570" containerID="2fae7b44934deb2f61dfa30059ff2a9d4e27ce928263e021c35df2bf0416f39e" exitCode=0
Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.422779 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerDied","Data":"2fae7b44934deb2f61dfa30059ff2a9d4e27ce928263e021c35df2bf0416f39e"}
Mar 19 09:30:38.428961 master-0 kubenswrapper[7385]: I0319 09:30:38.422796 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerStarted","Data":"fd8bb80d426a5da3f781ac199d36ba296827076a405918db4a564ba51e18307a"}
Mar 19 09:30:38.440245 master-0 kubenswrapper[7385]: I0319 09:30:38.440190 7385 scope.go:117] "RemoveContainer" containerID="3a5dd314e61c7e5e336d52053d0330f63d21f00e76686c7b0a177fb71dc220dc"
Mar 19 09:30:38.728455 master-0 kubenswrapper[7385]: I0319 09:30:38.728311 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:30:38.730464 master-0 kubenswrapper[7385]: I0319 09:30:38.730426 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:38.730464 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:38.730464 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:38.730464 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:38.730794 master-0 kubenswrapper[7385]: I0319 09:30:38.730479 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:39.434026 master-0 kubenswrapper[7385]: I0319 09:30:39.433918 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"e4c5ba739335e2b30a9fc97ef2c426fd0d64a733b74b4eee96d946d003152a68"}
Mar 19 09:30:39.434650 master-0 kubenswrapper[7385]: I0319 09:30:39.434191 7385 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825"
Mar 19 09:30:39.434650 master-0 kubenswrapper[7385]: I0319 09:30:39.434225 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="17c0c04f-24dc-4b53-ae4d-657022708825"
Mar 19 09:30:39.435853 master-0 kubenswrapper[7385]: I0319 09:30:39.435821 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/2.log"
Mar 19 09:30:39.728811 master-0 kubenswrapper[7385]: I0319 09:30:39.728664 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:30:39.730584 master-0 kubenswrapper[7385]: I0319 09:30:39.730532 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:39.730584 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:39.730584 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:39.730584 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:39.730816 master-0 kubenswrapper[7385]: I0319 09:30:39.730603 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:40.731514 master-0 kubenswrapper[7385]: I0319 09:30:40.731439 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:40.731514 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:40.731514 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:40.731514 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:40.731514 master-0 kubenswrapper[7385]: I0319 09:30:40.731502 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:41.731485 master-0 kubenswrapper[7385]: I0319 09:30:41.731405 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:41.731485 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:41.731485 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:41.731485 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:41.732223 master-0 kubenswrapper[7385]: I0319 09:30:41.731499 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:42.553593 master-0 kubenswrapper[7385]: I0319 09:30:42.553456 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 19 09:30:42.553593 master-0 kubenswrapper[7385]: I0319 09:30:42.553515 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 19 09:30:42.732416 master-0 kubenswrapper[7385]: I0319 09:30:42.732310 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:42.732416 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:42.732416 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:42.732416 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:42.733376 master-0 kubenswrapper[7385]: I0319 09:30:42.732408 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:43.731946 master-0 kubenswrapper[7385]: I0319 09:30:43.731881 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:43.731946 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:43.731946 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:43.731946 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:43.732245 master-0 kubenswrapper[7385]: I0319 09:30:43.731978 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:43.992820 master-0 kubenswrapper[7385]: I0319 09:30:43.992618 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body=
Mar 19 09:30:43.992820 master-0 kubenswrapper[7385]: I0319 09:30:43.992688 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Mar 19 09:30:43.992820 master-0 kubenswrapper[7385]: I0319 09:30:43.992738 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:30:43.993798 master-0 kubenswrapper[7385]: I0319 09:30:43.993438 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7293093e6350962a49b52ca956e7616bb3615bed3553d2b475cc57cee735c3ce"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 19 09:30:43.993798 master-0 kubenswrapper[7385]: I0319 09:30:43.993535 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" containerID="cri-o://7293093e6350962a49b52ca956e7616bb3615bed3553d2b475cc57cee735c3ce" gracePeriod=30
Mar 19 09:30:44.479147 master-0 kubenswrapper[7385]: I0319 09:30:44.479081 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/1.log"
Mar 19 09:30:44.481153 master-0 kubenswrapper[7385]: I0319 09:30:44.481109 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log"
Mar 19 09:30:44.481153 master-0 kubenswrapper[7385]: I0319 09:30:44.481146 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="7293093e6350962a49b52ca956e7616bb3615bed3553d2b475cc57cee735c3ce" exitCode=255
Mar 19 09:30:44.481357 master-0 kubenswrapper[7385]: I0319 09:30:44.481175 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerDied","Data":"7293093e6350962a49b52ca956e7616bb3615bed3553d2b475cc57cee735c3ce"}
Mar 19 09:30:44.481357 master-0 kubenswrapper[7385]: I0319 09:30:44.481203 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"520e1b78ebec36cf1ebd33e486551af8a737a3cccee8978f06e8f3ffb6e71959"}
Mar 19 09:30:44.481357 master-0 kubenswrapper[7385]: I0319 09:30:44.481221 7385 scope.go:117] "RemoveContainer" containerID="65e7eeeadf0553dafef845c5c629e20ef18e102a3f0f7e94e025271877410b78"
Mar 19 09:30:44.730829 master-0 kubenswrapper[7385]: I0319 09:30:44.730775 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:44.730829 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:44.730829 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:44.730829 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:44.731162 master-0 kubenswrapper[7385]: I0319 09:30:44.730846 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:45.489565 master-0 kubenswrapper[7385]: I0319 09:30:45.489491 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/1.log"
Mar 19 09:30:45.491695 master-0 kubenswrapper[7385]: I0319 09:30:45.491656 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log"
Mar 19 09:30:45.730580 master-0 kubenswrapper[7385]: I0319 09:30:45.730479 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:45.730580 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:45.730580 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:45.730580 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:45.730580 master-0 kubenswrapper[7385]: I0319 09:30:45.730572 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:46.066731 master-0 kubenswrapper[7385]: I0319 09:30:46.066660 7385 status_manager.go:851] "Failed to get status for pod" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods ingress-operator-66b84d69b-vfnhd)"
Mar 19 09:30:46.731363 master-0 kubenswrapper[7385]: I0319 09:30:46.731280 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:46.731363 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:46.731363 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:46.731363 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:46.732408 master-0 kubenswrapper[7385]: I0319 09:30:46.731366 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:47.730930 master-0 kubenswrapper[7385]: I0319 09:30:47.730844 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:47.730930 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:47.730930 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:47.730930 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:47.731404 master-0 kubenswrapper[7385]: I0319 09:30:47.730940 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:48.421708 master-0 kubenswrapper[7385]: I0319 09:30:48.421631 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 19 09:30:48.427850 master-0 kubenswrapper[7385]: I0319 09:30:48.427767 7385 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Mar 19 09:30:48.433882 master-0 kubenswrapper[7385]: I0319 09:30:48.433807 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 19 09:30:48.518895 master-0 kubenswrapper[7385]: I0319 09:30:48.517511 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bb7458647-2hx6x"]
Mar 19 09:30:48.528856 master-0 kubenswrapper[7385]: I0319 09:30:48.528792 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bb7458647-2hx6x"]
Mar 19 09:30:48.539221 master-0 kubenswrapper[7385]: I0319 09:30:48.539182 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f4f7b3-7f79-4618-b87a-400cadcb9813" path="/var/lib/kubelet/pods/c1f4f7b3-7f79-4618-b87a-400cadcb9813/volumes"
Mar 19 09:30:48.571051 master-0 kubenswrapper[7385]: I0319 09:30:48.570998 7385 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"]
Mar 19 09:30:48.574290 master-0 kubenswrapper[7385]: I0319 09:30:48.574259 7385 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f897ddc75-l2pbj"]
Mar 19 09:30:48.731247 master-0 kubenswrapper[7385]: I0319 09:30:48.731110 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:48.731247 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:48.731247 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:48.731247 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:48.731247 master-0 kubenswrapper[7385]: I0319 09:30:48.731164 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:49.530784 master-0 kubenswrapper[7385]: I0319 09:30:49.530710 7385 scope.go:117] "RemoveContainer" containerID="f0a12d54d0a014d4222e62ac44038595a5488e58e6bd422a47b37ea0dcba5fe2"
Mar 19 09:30:49.531167 master-0 kubenswrapper[7385]: E0319 09:30:49.531131 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-blgk8_openshift-cluster-storage-operator(de72ea6c-f3ce-41a5-9a43-9db4f27ed84b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" podUID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b"
Mar 19 09:30:49.731219 master-0 kubenswrapper[7385]: I0319 09:30:49.731159 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:49.731219 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:49.731219 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:49.731219 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:49.731781 master-0 kubenswrapper[7385]: I0319 09:30:49.731225 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:50.542112 master-0 kubenswrapper[7385]: I0319 09:30:50.542036 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e84d52-ae67-40d0-a2c5-39160b90fa0e" path="/var/lib/kubelet/pods/24e84d52-ae67-40d0-a2c5-39160b90fa0e/volumes"
Mar 19 09:30:50.731582 master-0 kubenswrapper[7385]: I0319 09:30:50.731494 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:50.731582 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:50.731582 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:50.731582 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:50.732334 master-0 kubenswrapper[7385]: I0319 09:30:50.731597 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:51.240840 master-0 kubenswrapper[7385]: E0319 09:30:51.240720 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 19 09:30:51.731315 master-0 kubenswrapper[7385]: I0319 09:30:51.731239 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:51.731315 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:51.731315 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:51.731315 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:51.732138 master-0 kubenswrapper[7385]: I0319 09:30:51.731345 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:52.530424 master-0 kubenswrapper[7385]: I0319 09:30:52.530287 7385 scope.go:117] "RemoveContainer" containerID="1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982"
Mar 19 09:30:52.530703 master-0 kubenswrapper[7385]: E0319 09:30:52.530606 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed
container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a"
Mar 19 09:30:52.577781 master-0 kubenswrapper[7385]: I0319 09:30:52.577723 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 19 09:30:52.731165 master-0 kubenswrapper[7385]: I0319 09:30:52.731109 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:52.731165 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:52.731165 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:52.731165 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:52.731725 master-0 kubenswrapper[7385]: I0319 09:30:52.731676 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:53.732069 master-0 kubenswrapper[7385]: I0319 09:30:53.731936 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:53.732069 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:53.732069 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:53.732069 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:53.732069 master-0 kubenswrapper[7385]: I0319 09:30:53.732052 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:53.991679 master-0 kubenswrapper[7385]: I0319 09:30:53.991397 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:30:53.992024 master-0 kubenswrapper[7385]: I0319 09:30:53.991952 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:30:54.731647 master-0 kubenswrapper[7385]: I0319 09:30:54.731589 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:54.731647 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:54.731647 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:54.731647 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:54.731990 master-0 kubenswrapper[7385]: I0319 09:30:54.731675 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:55.733030 master-0 kubenswrapper[7385]: I0319 09:30:55.732939 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:55.733030 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:55.733030 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:55.733030 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:55.733635 master-0 kubenswrapper[7385]: I0319 09:30:55.733029 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:56.731622 master-0 kubenswrapper[7385]: I0319 09:30:56.731530 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:56.731622 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:56.731622 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:56.731622 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:56.731622 master-0 kubenswrapper[7385]: I0319 09:30:56.731616 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:56.992438 master-0 kubenswrapper[7385]: I0319 09:30:56.992300 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:30:56.992438 master-0 kubenswrapper[7385]: I0319 09:30:56.992393 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:30:57.567788 master-0 kubenswrapper[7385]: I0319 09:30:57.567731 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 19 09:30:57.730449 master-0 kubenswrapper[7385]: I0319 09:30:57.730333 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:57.730449 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:57.730449 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:57.730449 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:57.730449 master-0 kubenswrapper[7385]: I0319 09:30:57.730402 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:58.731770 master-0 kubenswrapper[7385]: I0319 09:30:58.731669 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:58.731770 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:58.731770 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:58.731770 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:58.731770 master-0 kubenswrapper[7385]: I0319 09:30:58.731765 7385 prober.go:107]
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:30:59.731172 master-0 kubenswrapper[7385]: I0319 09:30:59.731036 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:30:59.731172 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:30:59.731172 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:30:59.731172 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:30:59.731172 master-0 kubenswrapper[7385]: I0319 09:30:59.731165 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:00.730640 master-0 kubenswrapper[7385]: I0319 09:31:00.730530 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:00.730640 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:00.730640 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:00.730640 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:00.730640 master-0 kubenswrapper[7385]: I0319 09:31:00.730616 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:01.439613 master-0 kubenswrapper[7385]: E0319 09:31:01.439495 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 19 09:31:01.732477 master-0 kubenswrapper[7385]: I0319 09:31:01.732234 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:01.732477 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:01.732477 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:01.732477 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:01.733868 master-0 kubenswrapper[7385]: I0319 09:31:01.733272 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:02.731609 master-0 kubenswrapper[7385]: I0319 09:31:02.731460 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:02.731609 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:02.731609 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:02.731609 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:02.731609 master-0 kubenswrapper[7385]: I0319 09:31:02.731566 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:03.530028 master-0 kubenswrapper[7385]: I0319 09:31:03.529958 7385 scope.go:117] "RemoveContainer" containerID="f0a12d54d0a014d4222e62ac44038595a5488e58e6bd422a47b37ea0dcba5fe2"
Mar 19 09:31:03.730703 master-0 kubenswrapper[7385]: I0319 09:31:03.730631 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:03.730703 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:03.730703 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:03.730703 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:03.731047 master-0 kubenswrapper[7385]: I0319 09:31:03.730715 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:04.633867 master-0 kubenswrapper[7385]: I0319 09:31:04.633744 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/2.log"
Mar 19 09:31:04.633867 master-0 kubenswrapper[7385]: I0319 09:31:04.633861 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerStarted","Data":"2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4"}
Mar 19 09:31:04.731520 master-0 kubenswrapper[7385]: I0319 09:31:04.731453 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:04.731520 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:04.731520 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:04.731520 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:04.731794 master-0 kubenswrapper[7385]: I0319 09:31:04.731533 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:05.531242 master-0 kubenswrapper[7385]: I0319 09:31:05.531142 7385 scope.go:117] "RemoveContainer" containerID="1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982"
Mar 19 09:31:05.731139 master-0 kubenswrapper[7385]: I0319 09:31:05.731086 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:05.731139 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:05.731139 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:05.731139 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:05.731871 master-0 kubenswrapper[7385]: I0319 09:31:05.731153 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:06.648948 master-0 kubenswrapper[7385]: I0319 09:31:06.648903 7385 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/3.log" Mar 19 09:31:06.649497 master-0 kubenswrapper[7385]: I0319 09:31:06.649439 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7"} Mar 19 09:31:06.732785 master-0 kubenswrapper[7385]: I0319 09:31:06.732718 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:06.732785 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:06.732785 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:06.732785 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:06.733307 master-0 kubenswrapper[7385]: I0319 09:31:06.732797 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:06.992206 master-0 kubenswrapper[7385]: I0319 09:31:06.992057 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 09:31:06.992206 master-0 kubenswrapper[7385]: I0319 09:31:06.992145 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 09:31:07.732471 master-0 kubenswrapper[7385]: I0319 09:31:07.732379 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:07.732471 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:07.732471 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:07.732471 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:07.732471 master-0 kubenswrapper[7385]: I0319 09:31:07.732456 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:08.243077 master-0 kubenswrapper[7385]: E0319 09:31:08.242909 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 09:31:08.731326 master-0 kubenswrapper[7385]: I0319 09:31:08.731272 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:08.731326 master-0 kubenswrapper[7385]: [-]has-synced failed: 
reason withheld Mar 19 09:31:08.731326 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:08.731326 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:08.731718 master-0 kubenswrapper[7385]: I0319 09:31:08.731337 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:09.732414 master-0 kubenswrapper[7385]: I0319 09:31:09.732313 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:09.732414 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:09.732414 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:09.732414 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:09.733425 master-0 kubenswrapper[7385]: I0319 09:31:09.732430 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:10.731475 master-0 kubenswrapper[7385]: I0319 09:31:10.731380 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:10.731475 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:10.731475 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:10.731475 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:10.732033 master-0 kubenswrapper[7385]: 
I0319 09:31:10.731488 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:11.586034 master-0 kubenswrapper[7385]: I0319 09:31:11.585967 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 19 09:31:11.732258 master-0 kubenswrapper[7385]: I0319 09:31:11.732194 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:11.732258 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:11.732258 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:11.732258 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:11.732837 master-0 kubenswrapper[7385]: I0319 09:31:11.732288 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:12.732484 master-0 kubenswrapper[7385]: I0319 09:31:12.732362 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:12.732484 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:12.732484 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:12.732484 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:12.733622 master-0 kubenswrapper[7385]: I0319 09:31:12.732486 7385 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:13.731839 master-0 kubenswrapper[7385]: I0319 09:31:13.731759 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:13.731839 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:13.731839 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:13.731839 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:13.731839 master-0 kubenswrapper[7385]: I0319 09:31:13.731836 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:14.568920 master-0 kubenswrapper[7385]: I0319 09:31:14.568816 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:37016->127.0.0.1:10357: read: connection reset by peer" start-of-body= Mar 19 09:31:14.570128 master-0 kubenswrapper[7385]: I0319 09:31:14.568919 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:37016->127.0.0.1:10357: read: connection reset by peer" Mar 19 09:31:14.570128 master-0 kubenswrapper[7385]: I0319 
09:31:14.569008 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:31:14.572186 master-0 kubenswrapper[7385]: I0319 09:31:14.570523 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"520e1b78ebec36cf1ebd33e486551af8a737a3cccee8978f06e8f3ffb6e71959"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 19 09:31:14.572186 master-0 kubenswrapper[7385]: I0319 09:31:14.570745 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" containerID="cri-o://520e1b78ebec36cf1ebd33e486551af8a737a3cccee8978f06e8f3ffb6e71959" gracePeriod=30 Mar 19 09:31:14.589425 master-0 kubenswrapper[7385]: I0319 09:31:14.589311 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=3.589284068 podStartE2EDuration="3.589284068s" podCreationTimestamp="2026-03-19 09:31:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:31:14.582965276 +0000 UTC m=+770.257395017" watchObservedRunningTime="2026-03-19 09:31:14.589284068 +0000 UTC m=+770.263713819" Mar 19 09:31:14.707737 master-0 kubenswrapper[7385]: I0319 09:31:14.707648 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/2.log" Mar 19 09:31:14.708145 master-0 kubenswrapper[7385]: I0319 09:31:14.708100 7385 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/1.log" Mar 19 09:31:14.709983 master-0 kubenswrapper[7385]: I0319 09:31:14.709933 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log" Mar 19 09:31:14.710111 master-0 kubenswrapper[7385]: I0319 09:31:14.709988 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="520e1b78ebec36cf1ebd33e486551af8a737a3cccee8978f06e8f3ffb6e71959" exitCode=255 Mar 19 09:31:14.710111 master-0 kubenswrapper[7385]: I0319 09:31:14.710026 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerDied","Data":"520e1b78ebec36cf1ebd33e486551af8a737a3cccee8978f06e8f3ffb6e71959"} Mar 19 09:31:14.710111 master-0 kubenswrapper[7385]: I0319 09:31:14.710064 7385 scope.go:117] "RemoveContainer" containerID="7293093e6350962a49b52ca956e7616bb3615bed3553d2b475cc57cee735c3ce" Mar 19 09:31:14.731182 master-0 kubenswrapper[7385]: I0319 09:31:14.731073 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:14.731182 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:14.731182 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:14.731182 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:14.731182 master-0 kubenswrapper[7385]: I0319 09:31:14.731139 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" 
podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:15.719837 master-0 kubenswrapper[7385]: I0319 09:31:15.719711 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/2.log" Mar 19 09:31:15.722062 master-0 kubenswrapper[7385]: I0319 09:31:15.722008 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log" Mar 19 09:31:15.722222 master-0 kubenswrapper[7385]: I0319 09:31:15.722084 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c"} Mar 19 09:31:15.731256 master-0 kubenswrapper[7385]: I0319 09:31:15.731198 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:15.731256 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:15.731256 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:15.731256 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:15.731782 master-0 kubenswrapper[7385]: I0319 09:31:15.731279 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:16.777181 master-0 kubenswrapper[7385]: I0319 09:31:16.777115 
7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:16.777181 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:16.777181 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:16.777181 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:16.777894 master-0 kubenswrapper[7385]: I0319 09:31:16.777194 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:17.731174 master-0 kubenswrapper[7385]: I0319 09:31:17.731074 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:17.731174 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:17.731174 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:17.731174 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:17.731174 master-0 kubenswrapper[7385]: I0319 09:31:17.731150 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:18.731059 master-0 kubenswrapper[7385]: I0319 09:31:18.731008 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:18.731059 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:18.731059 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:18.731059 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:18.731817 master-0 kubenswrapper[7385]: I0319 09:31:18.731687 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:19.464210 master-0 kubenswrapper[7385]: I0319 09:31:19.464042 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:31:19.731348 master-0 kubenswrapper[7385]: I0319 09:31:19.731196 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:19.731348 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:19.731348 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:19.731348 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:19.731348 master-0 kubenswrapper[7385]: I0319 09:31:19.731279 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:20.730070 master-0 kubenswrapper[7385]: I0319 09:31:20.730007 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:20.730070 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:20.730070 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:20.730070 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:20.730070 master-0 kubenswrapper[7385]: I0319 09:31:20.730070 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:21.732436 master-0 kubenswrapper[7385]: I0319 09:31:21.732355 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:21.732436 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:21.732436 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:21.732436 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:21.733134 master-0 kubenswrapper[7385]: I0319 09:31:21.732459 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:22.732656 master-0 kubenswrapper[7385]: I0319 09:31:22.732187 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:22.732656 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:22.732656 master-0 kubenswrapper[7385]: [+]process-running ok 
Mar 19 09:31:22.732656 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:22.732656 master-0 kubenswrapper[7385]: I0319 09:31:22.732299 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:22.775153 master-0 kubenswrapper[7385]: I0319 09:31:22.775098 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc_3a07456d-2e8e-4e80-a777-d0903ad21f07/cluster-baremetal-operator/1.log" Mar 19 09:31:22.776812 master-0 kubenswrapper[7385]: I0319 09:31:22.776780 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc_3a07456d-2e8e-4e80-a777-d0903ad21f07/cluster-baremetal-operator/0.log" Mar 19 09:31:22.776873 master-0 kubenswrapper[7385]: I0319 09:31:22.776842 7385 generic.go:334] "Generic (PLEG): container finished" podID="3a07456d-2e8e-4e80-a777-d0903ad21f07" containerID="42d7d82aba9e7b10269b85039d157d860181e8ade15cd12ada9b398768b2c3d9" exitCode=1 Mar 19 09:31:22.776914 master-0 kubenswrapper[7385]: I0319 09:31:22.776880 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" event={"ID":"3a07456d-2e8e-4e80-a777-d0903ad21f07","Type":"ContainerDied","Data":"42d7d82aba9e7b10269b85039d157d860181e8ade15cd12ada9b398768b2c3d9"} Mar 19 09:31:22.776945 master-0 kubenswrapper[7385]: I0319 09:31:22.776935 7385 scope.go:117] "RemoveContainer" containerID="4aeb041310edd04cfbff93e5aeff660e2a5fd04a8635a1408afa36607a005d38" Mar 19 09:31:22.779432 master-0 kubenswrapper[7385]: I0319 09:31:22.779365 7385 scope.go:117] "RemoveContainer" containerID="42d7d82aba9e7b10269b85039d157d860181e8ade15cd12ada9b398768b2c3d9" Mar 19 09:31:22.780390 master-0 
kubenswrapper[7385]: E0319 09:31:22.780318 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-sw7cc_openshift-machine-api(3a07456d-2e8e-4e80-a777-d0903ad21f07)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" podUID="3a07456d-2e8e-4e80-a777-d0903ad21f07" Mar 19 09:31:23.731154 master-0 kubenswrapper[7385]: I0319 09:31:23.731076 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:23.731154 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:23.731154 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:23.731154 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:23.731532 master-0 kubenswrapper[7385]: I0319 09:31:23.731166 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:23.783534 master-0 kubenswrapper[7385]: I0319 09:31:23.783482 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc_3a07456d-2e8e-4e80-a777-d0903ad21f07/cluster-baremetal-operator/1.log" Mar 19 09:31:23.991061 master-0 kubenswrapper[7385]: I0319 09:31:23.990911 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:31:23.991250 master-0 kubenswrapper[7385]: I0319 09:31:23.991103 7385 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:31:24.730908 master-0 kubenswrapper[7385]: I0319 09:31:24.730803 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:24.730908 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:24.730908 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:24.730908 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:24.731301 master-0 kubenswrapper[7385]: I0319 09:31:24.730898 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:25.244249 master-0 kubenswrapper[7385]: E0319 09:31:25.244148 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 09:31:25.732475 master-0 kubenswrapper[7385]: I0319 09:31:25.732415 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:25.732475 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:25.732475 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:25.732475 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:25.732792 master-0 kubenswrapper[7385]: I0319 
09:31:25.732483 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:26.731361 master-0 kubenswrapper[7385]: I0319 09:31:26.731270 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:26.731361 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:26.731361 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:26.731361 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:26.732028 master-0 kubenswrapper[7385]: I0319 09:31:26.731400 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:26.992127 master-0 kubenswrapper[7385]: I0319 09:31:26.991904 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:31:26.992127 master-0 kubenswrapper[7385]: I0319 09:31:26.992053 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:31:27.731311 master-0 kubenswrapper[7385]: I0319 09:31:27.731238 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:27.731311 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:27.731311 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:27.731311 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:27.731311 master-0 kubenswrapper[7385]: I0319 09:31:27.731307 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:28.731171 master-0 kubenswrapper[7385]: I0319 09:31:28.731040 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:28.731171 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:28.731171 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:28.731171 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:28.731171 master-0 kubenswrapper[7385]: I0319 09:31:28.731103 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:29.730612 master-0 kubenswrapper[7385]: I0319 09:31:29.730417 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:29.730612 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:29.730612 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:29.730612 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:29.730612 master-0 kubenswrapper[7385]: I0319 09:31:29.730487 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:30.730839 master-0 kubenswrapper[7385]: I0319 09:31:30.730746 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:30.730839 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:30.730839 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:30.730839 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:30.730839 master-0 kubenswrapper[7385]: I0319 09:31:30.730833 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:31.731449 master-0 kubenswrapper[7385]: I0319 09:31:31.731144 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:31.731449 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:31.731449 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:31.731449 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:31.731449 master-0 kubenswrapper[7385]: I0319 09:31:31.731255 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:32.731866 master-0 kubenswrapper[7385]: I0319 09:31:32.731757 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:32.731866 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:32.731866 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:32.731866 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:32.731866 master-0 kubenswrapper[7385]: I0319 09:31:32.731852 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:33.731529 master-0 kubenswrapper[7385]: I0319 09:31:33.731430 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:33.731529 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:33.731529 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:33.731529 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:33.731529 master-0 kubenswrapper[7385]: I0319 09:31:33.731510 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:33.858327 master-0 kubenswrapper[7385]: I0319 09:31:33.858230 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/3.log"
Mar 19 09:31:33.858905 master-0 kubenswrapper[7385]: I0319 09:31:33.858844 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/2.log"
Mar 19 09:31:33.858905 master-0 kubenswrapper[7385]: I0319 09:31:33.858899 7385 generic.go:334] "Generic (PLEG): container finished" podID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b" containerID="2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4" exitCode=1
Mar 19 09:31:33.859170 master-0 kubenswrapper[7385]: I0319 09:31:33.858932 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerDied","Data":"2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4"}
Mar 19 09:31:33.859170 master-0 kubenswrapper[7385]: I0319 09:31:33.858976 7385 scope.go:117] "RemoveContainer" containerID="f0a12d54d0a014d4222e62ac44038595a5488e58e6bd422a47b37ea0dcba5fe2"
Mar 19 09:31:33.859727 master-0 kubenswrapper[7385]: I0319 09:31:33.859667 7385 scope.go:117] "RemoveContainer" containerID="2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4"
Mar 19 09:31:33.860036 master-0 kubenswrapper[7385]: E0319 09:31:33.859984 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-blgk8_openshift-cluster-storage-operator(de72ea6c-f3ce-41a5-9a43-9db4f27ed84b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" podUID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b"
Mar 19 09:31:34.730870 master-0 kubenswrapper[7385]: I0319 09:31:34.730819 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:34.730870 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:34.730870 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:34.730870 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:34.731212 master-0 kubenswrapper[7385]: I0319 09:31:34.730878 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:34.865386 master-0 kubenswrapper[7385]: I0319 09:31:34.865329 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/3.log"
Mar 19 09:31:35.530565 master-0 kubenswrapper[7385]: I0319 09:31:35.530492 7385 scope.go:117] "RemoveContainer" containerID="42d7d82aba9e7b10269b85039d157d860181e8ade15cd12ada9b398768b2c3d9"
Mar 19 09:31:35.731152 master-0 kubenswrapper[7385]: I0319 09:31:35.731078 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:35.731152 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:35.731152 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:35.731152 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:35.731426 master-0 kubenswrapper[7385]: I0319 09:31:35.731178 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:35.873728 master-0 kubenswrapper[7385]: I0319 09:31:35.873673 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc_3a07456d-2e8e-4e80-a777-d0903ad21f07/cluster-baremetal-operator/1.log"
Mar 19 09:31:35.874274 master-0 kubenswrapper[7385]: I0319 09:31:35.874064 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" event={"ID":"3a07456d-2e8e-4e80-a777-d0903ad21f07","Type":"ContainerStarted","Data":"84dc95679872295361f7662aa0f240ba079fe89ee66813aa94018787c1925f42"}
Mar 19 09:31:36.731823 master-0 kubenswrapper[7385]: I0319 09:31:36.731762 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:36.731823 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:36.731823 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:36.731823 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:36.732231 master-0 kubenswrapper[7385]: I0319 09:31:36.731833 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:36.992170 master-0 kubenswrapper[7385]: I0319 09:31:36.992006 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 19 09:31:36.992170 master-0 kubenswrapper[7385]: I0319 09:31:36.992136 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 19 09:31:37.732325 master-0 kubenswrapper[7385]: I0319 09:31:37.732216 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:37.732325 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:37.732325 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:37.732325 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:37.732848 master-0 kubenswrapper[7385]: I0319 09:31:37.732319 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:38.731734 master-0 kubenswrapper[7385]: I0319 09:31:38.731631 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:38.731734 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:38.731734 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:38.731734 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:38.732847 master-0 kubenswrapper[7385]: I0319 09:31:38.731727 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:39.730509 master-0 kubenswrapper[7385]: I0319 09:31:39.730422 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:39.730509 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:39.730509 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:39.730509 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:39.730787 master-0 kubenswrapper[7385]: I0319 09:31:39.730591 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:40.730952 master-0 kubenswrapper[7385]: I0319 09:31:40.730844 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:40.730952 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:40.730952 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:40.730952 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:40.731560 master-0 kubenswrapper[7385]: I0319 09:31:40.730972 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:41.731006 master-0 kubenswrapper[7385]: I0319 09:31:41.730913 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:41.731006 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:41.731006 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:41.731006 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:41.731658 master-0 kubenswrapper[7385]: I0319 09:31:41.731030 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:42.245304 master-0 kubenswrapper[7385]: E0319 09:31:42.245205 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 19 09:31:42.731045 master-0 kubenswrapper[7385]: I0319 09:31:42.730967 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:42.731045 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:42.731045 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:42.731045 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:42.731045 master-0 kubenswrapper[7385]: I0319 09:31:42.731033 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:43.731339 master-0 kubenswrapper[7385]: I0319 09:31:43.731277 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:43.731339 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:43.731339 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:43.731339 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:43.731920 master-0 kubenswrapper[7385]: I0319 09:31:43.731351 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:44.732732 master-0 kubenswrapper[7385]: I0319 09:31:44.732648 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:44.732732 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:44.732732 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:44.732732 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:44.733718 master-0 kubenswrapper[7385]: I0319 09:31:44.732748 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:45.490855 master-0 kubenswrapper[7385]: I0319 09:31:45.490028 7385 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:58540->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 19 09:31:45.490855 master-0 kubenswrapper[7385]: I0319 09:31:45.490093 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:58540->127.0.0.1:10357: read: connection reset by peer"
Mar 19 09:31:45.490855 master-0 kubenswrapper[7385]: I0319 09:31:45.490147 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:31:45.491597 master-0 kubenswrapper[7385]: I0319 09:31:45.491534 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 19 09:31:45.493035 master-0 kubenswrapper[7385]: I0319 09:31:45.492991 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" containerID="cri-o://d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c" gracePeriod=30
Mar 19 09:31:45.509008 master-0 kubenswrapper[7385]: E0319 09:31:45.508901 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(67658b93f6f5927402b87ec35623e46e)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e"
Mar 19 09:31:45.731606 master-0 kubenswrapper[7385]: I0319 09:31:45.731501 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:45.731606 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:45.731606 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:45.731606 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:45.732108 master-0 kubenswrapper[7385]: I0319 09:31:45.731642 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:45.953465 master-0 kubenswrapper[7385]: I0319 09:31:45.953381 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/3.log"
Mar 19 09:31:45.954749 master-0 kubenswrapper[7385]: I0319 09:31:45.954095 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/2.log"
Mar 19 09:31:45.956703 master-0 kubenswrapper[7385]: I0319 09:31:45.956642 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log"
Mar 19 09:31:45.956856 master-0 kubenswrapper[7385]: I0319 09:31:45.956709 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c" exitCode=255
Mar 19 09:31:45.956856 master-0 kubenswrapper[7385]: I0319 09:31:45.956746 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerDied","Data":"d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c"}
Mar 19 09:31:45.956856 master-0 kubenswrapper[7385]: I0319 09:31:45.956783 7385 scope.go:117] "RemoveContainer" containerID="520e1b78ebec36cf1ebd33e486551af8a737a3cccee8978f06e8f3ffb6e71959"
Mar 19 09:31:45.958080 master-0 kubenswrapper[7385]: I0319 09:31:45.958017 7385 scope.go:117] "RemoveContainer" containerID="d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c"
Mar 19 09:31:45.958873 master-0 kubenswrapper[7385]: E0319 09:31:45.958803 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(67658b93f6f5927402b87ec35623e46e)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e"
Mar 19 09:31:46.731646 master-0 kubenswrapper[7385]: I0319 09:31:46.731556 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:46.731646 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:46.731646 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:46.731646 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:46.731646 master-0 kubenswrapper[7385]: I0319 09:31:46.731642 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:46.965742 master-0 kubenswrapper[7385]: I0319 09:31:46.965680 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/3.log"
Mar 19 09:31:46.967515 master-0 kubenswrapper[7385]: I0319 09:31:46.967467 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log"
Mar 19 09:31:47.732304 master-0 kubenswrapper[7385]: I0319 09:31:47.732160 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:47.732304 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:47.732304 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:47.732304 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:47.733127 master-0 kubenswrapper[7385]: I0319 09:31:47.732308 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:48.533350 master-0 kubenswrapper[7385]: I0319 09:31:48.533264 7385 scope.go:117] "RemoveContainer" containerID="2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4"
Mar 19 09:31:48.534456 master-0 kubenswrapper[7385]: E0319 09:31:48.533529 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-blgk8_openshift-cluster-storage-operator(de72ea6c-f3ce-41a5-9a43-9db4f27ed84b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" podUID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b"
Mar 19 09:31:48.732182 master-0 kubenswrapper[7385]: I0319 09:31:48.732109 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:48.732182 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:48.732182 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:48.732182 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:48.732627 master-0 kubenswrapper[7385]: I0319 09:31:48.732193 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:49.732350 master-0 kubenswrapper[7385]: I0319 09:31:49.732281 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:49.732350 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:49.732350 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:49.732350 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:49.733334 master-0 kubenswrapper[7385]: I0319 09:31:49.732365 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:50.731405 master-0 kubenswrapper[7385]: I0319 09:31:50.731330 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:50.731405 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:50.731405 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:50.731405 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:50.731906 master-0 kubenswrapper[7385]: I0319 09:31:50.731424 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:51.732204 master-0 kubenswrapper[7385]: I0319 09:31:51.732117 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:51.732204 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:51.732204 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:51.732204 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:51.732204 master-0 kubenswrapper[7385]: I0319 09:31:51.732188 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:52.731319 master-0 kubenswrapper[7385]: I0319 09:31:52.731243 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:52.731319 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:52.731319 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:52.731319 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:52.731957 master-0 kubenswrapper[7385]: I0319 09:31:52.731337 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:53.730689 master-0 kubenswrapper[7385]: I0319 09:31:53.730638 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:53.730689 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:53.730689 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:53.730689 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:53.731367 master-0 kubenswrapper[7385]: I0319 09:31:53.730719 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:53.991751 master-0 kubenswrapper[7385]: I0319 09:31:53.991625 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:31:53.992363 master-0 kubenswrapper[7385]: I0319 09:31:53.992323 7385 scope.go:117] "RemoveContainer" containerID="d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c"
Mar 19 09:31:53.992579 master-0 kubenswrapper[7385]: E0319 09:31:53.992536 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(67658b93f6f5927402b87ec35623e46e)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e"
Mar 19 09:31:54.731664 master-0 kubenswrapper[7385]: I0319 09:31:54.731608 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:54.731664 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:54.731664 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:54.731664 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:54.732361 master-0 kubenswrapper[7385]: I0319 09:31:54.732326 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:55.730251 master-0 kubenswrapper[7385]: I0319 09:31:55.730173 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:55.730251 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:55.730251 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:55.730251 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:55.730594 master-0 kubenswrapper[7385]: I0319 09:31:55.730273 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:31:56.731076 master-0 kubenswrapper[7385]: I0319 09:31:56.730936 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:31:56.731076 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:31:56.731076 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:31:56.731076 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:31:56.731076 master-0 kubenswrapper[7385]: I0319 09:31:56.731022
7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:57.731365 master-0 kubenswrapper[7385]: I0319 09:31:57.731300 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:57.731365 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:57.731365 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:57.731365 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:57.731975 master-0 kubenswrapper[7385]: I0319 09:31:57.731367 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:31:58.731648 master-0 kubenswrapper[7385]: I0319 09:31:58.731531 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:58.731648 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:58.731648 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:58.731648 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:58.731648 master-0 kubenswrapper[7385]: I0319 09:31:58.731631 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 19 09:31:59.246714 master-0 kubenswrapper[7385]: E0319 09:31:59.246634 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 09:31:59.730854 master-0 kubenswrapper[7385]: I0319 09:31:59.730804 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:31:59.730854 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:31:59.730854 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:31:59.730854 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:31:59.731201 master-0 kubenswrapper[7385]: I0319 09:31:59.730872 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:00.530178 master-0 kubenswrapper[7385]: I0319 09:32:00.530107 7385 scope.go:117] "RemoveContainer" containerID="2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4" Mar 19 09:32:00.530729 master-0 kubenswrapper[7385]: E0319 09:32:00.530351 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-blgk8_openshift-cluster-storage-operator(de72ea6c-f3ce-41a5-9a43-9db4f27ed84b)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" 
podUID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b" Mar 19 09:32:00.731135 master-0 kubenswrapper[7385]: I0319 09:32:00.731066 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:00.731135 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:00.731135 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:00.731135 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:00.731516 master-0 kubenswrapper[7385]: I0319 09:32:00.731152 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:00.766305 master-0 kubenswrapper[7385]: I0319 09:32:00.766233 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 19 09:32:00.766565 master-0 kubenswrapper[7385]: E0319 09:32:00.766478 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5780efa-c56a-4953-807f-6a51efc91b09" containerName="installer" Mar 19 09:32:00.766565 master-0 kubenswrapper[7385]: I0319 09:32:00.766490 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5780efa-c56a-4953-807f-6a51efc91b09" containerName="installer" Mar 19 09:32:00.766565 master-0 kubenswrapper[7385]: E0319 09:32:00.766519 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerName="installer" Mar 19 09:32:00.766565 master-0 kubenswrapper[7385]: I0319 09:32:00.766527 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerName="installer" Mar 19 09:32:00.766749 master-0 kubenswrapper[7385]: I0319 
09:32:00.766641 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5780efa-c56a-4953-807f-6a51efc91b09" containerName="installer" Mar 19 09:32:00.766749 master-0 kubenswrapper[7385]: I0319 09:32:00.766661 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerName="installer" Mar 19 09:32:00.767295 master-0 kubenswrapper[7385]: I0319 09:32:00.767197 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.769067 master-0 kubenswrapper[7385]: I0319 09:32:00.768929 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-444qc" Mar 19 09:32:00.769612 master-0 kubenswrapper[7385]: I0319 09:32:00.769562 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 19 09:32:00.776910 master-0 kubenswrapper[7385]: I0319 09:32:00.776869 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 19 09:32:00.833796 master-0 kubenswrapper[7385]: I0319 09:32:00.833684 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.833796 master-0 kubenswrapper[7385]: I0319 09:32:00.833754 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.833796 
master-0 kubenswrapper[7385]: I0319 09:32:00.833774 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-var-lock\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.934533 master-0 kubenswrapper[7385]: I0319 09:32:00.934462 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.934762 master-0 kubenswrapper[7385]: I0319 09:32:00.934672 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.934762 master-0 kubenswrapper[7385]: I0319 09:32:00.934698 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-var-lock\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.934865 master-0 kubenswrapper[7385]: I0319 09:32:00.934826 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:00.934903 master-0 
kubenswrapper[7385]: I0319 09:32:00.934857 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-var-lock\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:32:01.731269 master-0 kubenswrapper[7385]: I0319 09:32:01.731154 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:01.731269 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:01.731269 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:01.731269 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:01.732141 master-0 kubenswrapper[7385]: I0319 09:32:01.731317 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:02.072716 master-0 kubenswrapper[7385]: I0319 09:32:02.072528 7385 generic.go:334] "Generic (PLEG): container finished" podID="70e8c62b-97c3-4c0c-85d3-f660118831fd" containerID="94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325" exitCode=0 Mar 19 09:32:02.072716 master-0 kubenswrapper[7385]: I0319 09:32:02.072653 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerDied","Data":"94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325"} Mar 19 09:32:02.072716 master-0 kubenswrapper[7385]: I0319 09:32:02.072699 7385 scope.go:117] "RemoveContainer" 
containerID="3091cd39c91635e4ee1ea702b34d340a7966feb6a8a53ede843ba60081ff82bc" Mar 19 09:32:02.073376 master-0 kubenswrapper[7385]: I0319 09:32:02.073339 7385 scope.go:117] "RemoveContainer" containerID="94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325" Mar 19 09:32:02.073696 master-0 kubenswrapper[7385]: E0319 09:32:02.073651 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-h4zrl_openshift-insights(70e8c62b-97c3-4c0c-85d3-f660118831fd)\"" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" podUID="70e8c62b-97c3-4c0c-85d3-f660118831fd" Mar 19 09:32:02.731817 master-0 kubenswrapper[7385]: I0319 09:32:02.731775 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:02.731817 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:02.731817 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:02.731817 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:02.732400 master-0 kubenswrapper[7385]: I0319 09:32:02.732375 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:03.731012 master-0 kubenswrapper[7385]: I0319 09:32:03.730942 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:03.731012 master-0 
kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:03.731012 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:03.731012 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:03.731283 master-0 kubenswrapper[7385]: I0319 09:32:03.731046 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:04.730636 master-0 kubenswrapper[7385]: I0319 09:32:04.730588 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:04.730636 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:04.730636 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:04.730636 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:04.731349 master-0 kubenswrapper[7385]: I0319 09:32:04.731319 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:05.730649 master-0 kubenswrapper[7385]: I0319 09:32:05.730403 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:05.730649 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:05.730649 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:05.730649 master-0 kubenswrapper[7385]: healthz check failed Mar 19 
09:32:05.730649 master-0 kubenswrapper[7385]: I0319 09:32:05.730468 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:06.100042 master-0 kubenswrapper[7385]: I0319 09:32:06.099986 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/4.log" Mar 19 09:32:06.100621 master-0 kubenswrapper[7385]: I0319 09:32:06.100589 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/3.log" Mar 19 09:32:06.101019 master-0 kubenswrapper[7385]: I0319 09:32:06.100963 7385 generic.go:334] "Generic (PLEG): container finished" podID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7" exitCode=1 Mar 19 09:32:06.101019 master-0 kubenswrapper[7385]: I0319 09:32:06.101006 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerDied","Data":"1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7"} Mar 19 09:32:06.101241 master-0 kubenswrapper[7385]: I0319 09:32:06.101042 7385 scope.go:117] "RemoveContainer" containerID="1414f38cfa330b290349e45e82446396a92d9531c7778dff1922986963347982" Mar 19 09:32:06.101897 master-0 kubenswrapper[7385]: I0319 09:32:06.101831 7385 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7" Mar 19 09:32:06.102392 master-0 kubenswrapper[7385]: E0319 09:32:06.102293 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:32:06.732051 master-0 kubenswrapper[7385]: I0319 09:32:06.731988 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:06.732051 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:06.732051 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:06.732051 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:06.732596 master-0 kubenswrapper[7385]: I0319 09:32:06.732070 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:07.108798 master-0 kubenswrapper[7385]: I0319 09:32:07.108720 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/4.log" Mar 19 09:32:07.731002 master-0 kubenswrapper[7385]: I0319 09:32:07.730944 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:07.731002 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:07.731002 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:07.731002 master-0 
kubenswrapper[7385]: healthz check failed Mar 19 09:32:07.731369 master-0 kubenswrapper[7385]: I0319 09:32:07.731307 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:08.530689 master-0 kubenswrapper[7385]: I0319 09:32:08.530658 7385 scope.go:117] "RemoveContainer" containerID="d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c" Mar 19 09:32:08.531404 master-0 kubenswrapper[7385]: E0319 09:32:08.531384 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(67658b93f6f5927402b87ec35623e46e)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" Mar 19 09:32:08.730881 master-0 kubenswrapper[7385]: I0319 09:32:08.730833 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:08.730881 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:08.730881 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:08.730881 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:08.731229 master-0 kubenswrapper[7385]: I0319 09:32:08.731200 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:09.730902 master-0 
kubenswrapper[7385]: I0319 09:32:09.730851 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:09.730902 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:09.730902 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:09.730902 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:09.731590 master-0 kubenswrapper[7385]: I0319 09:32:09.730919 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:10.731144 master-0 kubenswrapper[7385]: I0319 09:32:10.731088 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:10.731144 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:10.731144 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:10.731144 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:10.731862 master-0 kubenswrapper[7385]: I0319 09:32:10.731154 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:11.731722 master-0 kubenswrapper[7385]: I0319 09:32:11.731612 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:11.731722 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:11.731722 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:11.731722 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:11.731722 master-0 kubenswrapper[7385]: I0319 09:32:11.731701 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:12.730824 master-0 kubenswrapper[7385]: I0319 09:32:12.730730 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:12.730824 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:12.730824 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:12.730824 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:12.730824 master-0 kubenswrapper[7385]: I0319 09:32:12.730791 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:13.730974 master-0 kubenswrapper[7385]: I0319 09:32:13.730893 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:13.730974 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:13.730974 master-0 kubenswrapper[7385]: 
[+]process-running ok Mar 19 09:32:13.730974 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:13.732179 master-0 kubenswrapper[7385]: I0319 09:32:13.730986 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:14.730501 master-0 kubenswrapper[7385]: I0319 09:32:14.730404 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:14.730501 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:14.730501 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:14.730501 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:14.731733 master-0 kubenswrapper[7385]: I0319 09:32:14.730503 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:15.530794 master-0 kubenswrapper[7385]: I0319 09:32:15.530734 7385 scope.go:117] "RemoveContainer" containerID="2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4" Mar 19 09:32:15.731533 master-0 kubenswrapper[7385]: I0319 09:32:15.731443 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:15.731533 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:15.731533 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 
09:32:15.731533 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:15.732196 master-0 kubenswrapper[7385]: I0319 09:32:15.731561 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:16.201807 master-0 kubenswrapper[7385]: I0319 09:32:16.201774 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/3.log" Mar 19 09:32:16.202193 master-0 kubenswrapper[7385]: I0319 09:32:16.202169 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" event={"ID":"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b","Type":"ContainerStarted","Data":"ee5be95e7999889167077b32cc3e7622f8b7639519cd837962db9cf1300662ee"} Mar 19 09:32:16.248196 master-0 kubenswrapper[7385]: E0319 09:32:16.247835 7385 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 19 09:32:16.530283 master-0 kubenswrapper[7385]: I0319 09:32:16.529914 7385 scope.go:117] "RemoveContainer" containerID="94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325" Mar 19 09:32:16.531028 master-0 kubenswrapper[7385]: E0319 09:32:16.530459 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-h4zrl_openshift-insights(70e8c62b-97c3-4c0c-85d3-f660118831fd)\"" 
pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" podUID="70e8c62b-97c3-4c0c-85d3-f660118831fd" Mar 19 09:32:16.731667 master-0 kubenswrapper[7385]: I0319 09:32:16.731613 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:16.731667 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:16.731667 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:16.731667 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:16.732714 master-0 kubenswrapper[7385]: I0319 09:32:16.732645 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:17.731953 master-0 kubenswrapper[7385]: I0319 09:32:17.731844 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:17.731953 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:17.731953 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:17.731953 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:17.733122 master-0 kubenswrapper[7385]: I0319 09:32:17.731946 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:18.731985 master-0 kubenswrapper[7385]: I0319 09:32:18.731916 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:18.731985 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:18.731985 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:18.731985 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:18.733261 master-0 kubenswrapper[7385]: I0319 09:32:18.732009 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:19.530733 master-0 kubenswrapper[7385]: I0319 09:32:19.530321 7385 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7" Mar 19 09:32:19.530733 master-0 kubenswrapper[7385]: E0319 09:32:19.530608 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:32:19.732317 master-0 kubenswrapper[7385]: I0319 09:32:19.732245 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:19.732317 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:19.732317 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:19.732317 master-0 
kubenswrapper[7385]: healthz check failed Mar 19 09:32:19.732947 master-0 kubenswrapper[7385]: I0319 09:32:19.732326 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:20.732511 master-0 kubenswrapper[7385]: I0319 09:32:20.732437 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:20.732511 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:20.732511 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:20.732511 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:20.732511 master-0 kubenswrapper[7385]: I0319 09:32:20.732506 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:21.192157 master-0 kubenswrapper[7385]: I0319 09:32:21.192098 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 19 09:32:21.193055 master-0 kubenswrapper[7385]: I0319 09:32:21.192983 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.195610 master-0 kubenswrapper[7385]: I0319 09:32:21.195560 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 19 09:32:21.199047 master-0 kubenswrapper[7385]: I0319 09:32:21.198984 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-j48rl" Mar 19 09:32:21.209261 master-0 kubenswrapper[7385]: I0319 09:32:21.209183 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 19 09:32:21.327359 master-0 kubenswrapper[7385]: I0319 09:32:21.327304 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.328011 master-0 kubenswrapper[7385]: I0319 09:32:21.327975 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.328234 master-0 kubenswrapper[7385]: I0319 09:32:21.328205 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.430106 master-0 kubenswrapper[7385]: I0319 09:32:21.430014 7385 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.430106 master-0 kubenswrapper[7385]: I0319 09:32:21.430083 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.430106 master-0 kubenswrapper[7385]: I0319 09:32:21.430096 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.431259 master-0 kubenswrapper[7385]: I0319 09:32:21.430324 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.431259 master-0 kubenswrapper[7385]: I0319 09:32:21.430404 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:32:21.731898 master-0 kubenswrapper[7385]: I0319 09:32:21.731833 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:21.731898 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:21.731898 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:21.731898 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:21.731898 master-0 kubenswrapper[7385]: I0319 09:32:21.731894 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:22.239405 master-0 kubenswrapper[7385]: I0319 09:32:22.239358 7385 generic.go:334] "Generic (PLEG): container finished" podID="70258988-8374-4aee-aaa2-be3c2e853062" containerID="3e4b6d4a6ba7dc16d944e3b9eee5d338268651e600b3b4017cd71ee472e3564c" exitCode=0 Mar 19 09:32:22.239405 master-0 kubenswrapper[7385]: I0319 09:32:22.239401 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" event={"ID":"70258988-8374-4aee-aaa2-be3c2e853062","Type":"ContainerDied","Data":"3e4b6d4a6ba7dc16d944e3b9eee5d338268651e600b3b4017cd71ee472e3564c"} Mar 19 09:32:22.239930 master-0 kubenswrapper[7385]: I0319 09:32:22.239433 7385 scope.go:117] "RemoveContainer" containerID="48e3bb33c4cfc2acfda10baf096f5ef90778cf5f988e45ef005dd24496a67e52" Mar 19 09:32:22.239930 master-0 kubenswrapper[7385]: I0319 09:32:22.239924 7385 scope.go:117] "RemoveContainer" containerID="3e4b6d4a6ba7dc16d944e3b9eee5d338268651e600b3b4017cd71ee472e3564c" Mar 19 09:32:22.530212 master-0 kubenswrapper[7385]: I0319 09:32:22.530091 7385 scope.go:117] "RemoveContainer" containerID="d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c" Mar 19 09:32:22.530941 master-0 kubenswrapper[7385]: E0319 
09:32:22.530898 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(67658b93f6f5927402b87ec35623e46e)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" Mar 19 09:32:22.732190 master-0 kubenswrapper[7385]: I0319 09:32:22.732112 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:22.732190 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:22.732190 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:22.732190 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:22.732514 master-0 kubenswrapper[7385]: I0319 09:32:22.732222 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:23.248292 master-0 kubenswrapper[7385]: I0319 09:32:23.248217 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" event={"ID":"70258988-8374-4aee-aaa2-be3c2e853062","Type":"ContainerStarted","Data":"589a248fa6c416cfa1281fd8a7b4269c22b9a7ae5a3b938a2fab4b90a53f9237"} Mar 19 09:32:23.731526 master-0 kubenswrapper[7385]: I0319 09:32:23.731424 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:23.731526 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:23.731526 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:23.731526 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:23.731526 master-0 kubenswrapper[7385]: I0319 09:32:23.731516 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:24.259472 master-0 kubenswrapper[7385]: I0319 09:32:24.259417 7385 generic.go:334] "Generic (PLEG): container finished" podID="525b41b5-82d8-4d47-8350-79644a2c9360" containerID="70d174fd4e01098348af77daa0e495ddb88708e136a02b054e3fa91916dd11b3" exitCode=0 Mar 19 09:32:24.260108 master-0 kubenswrapper[7385]: I0319 09:32:24.259467 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" event={"ID":"525b41b5-82d8-4d47-8350-79644a2c9360","Type":"ContainerDied","Data":"70d174fd4e01098348af77daa0e495ddb88708e136a02b054e3fa91916dd11b3"} Mar 19 09:32:24.260108 master-0 kubenswrapper[7385]: I0319 09:32:24.259520 7385 scope.go:117] "RemoveContainer" containerID="24b10bdbe30c7b6a34e02317c7a4fad144a2b0ece63d82300dc1de99318fd6fe" Mar 19 09:32:24.260278 master-0 kubenswrapper[7385]: I0319 09:32:24.260243 7385 scope.go:117] "RemoveContainer" containerID="70d174fd4e01098348af77daa0e495ddb88708e136a02b054e3fa91916dd11b3" Mar 19 09:32:24.731299 master-0 kubenswrapper[7385]: I0319 09:32:24.731231 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:24.731299 master-0 
kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:24.731299 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:24.731299 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:24.731572 master-0 kubenswrapper[7385]: I0319 09:32:24.731302 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:25.268206 master-0 kubenswrapper[7385]: I0319 09:32:25.268057 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" event={"ID":"525b41b5-82d8-4d47-8350-79644a2c9360","Type":"ContainerStarted","Data":"32d9b64d57addc5acd14127964a8c6b0865e7477cea9fcceaceaa60c09c97ee9"} Mar 19 09:32:25.732091 master-0 kubenswrapper[7385]: I0319 09:32:25.732028 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:25.732091 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:25.732091 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:25.732091 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:25.732624 master-0 kubenswrapper[7385]: I0319 09:32:25.732095 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:26.730937 master-0 kubenswrapper[7385]: I0319 09:32:26.730874 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:26.730937 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:26.730937 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:26.730937 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:26.731787 master-0 kubenswrapper[7385]: I0319 09:32:26.730959 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:26.936863 master-0 kubenswrapper[7385]: I0319 09:32:26.936805 7385 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 19 09:32:26.937737 master-0 kubenswrapper[7385]: I0319 09:32:26.937704 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:26.939251 master-0 kubenswrapper[7385]: I0319 09:32:26.939228 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cnb44" Mar 19 09:32:26.939714 master-0 kubenswrapper[7385]: I0319 09:32:26.939691 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 19 09:32:26.945939 master-0 kubenswrapper[7385]: I0319 09:32:26.945883 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 19 09:32:27.118076 master-0 kubenswrapper[7385]: I0319 09:32:27.117995 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kube-api-access\") pod 
\"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.118407 master-0 kubenswrapper[7385]: I0319 09:32:27.118208 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.118407 master-0 kubenswrapper[7385]: I0319 09:32:27.118282 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-var-lock\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.219911 master-0 kubenswrapper[7385]: I0319 09:32:27.219854 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.220191 master-0 kubenswrapper[7385]: I0319 09:32:27.219938 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.220191 master-0 kubenswrapper[7385]: I0319 09:32:27.219964 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-var-lock\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.220191 master-0 kubenswrapper[7385]: I0319 09:32:27.220001 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kube-api-access\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.220191 master-0 kubenswrapper[7385]: I0319 09:32:27.220093 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-var-lock\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:32:27.731132 master-0 kubenswrapper[7385]: I0319 09:32:27.731065 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:27.731132 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:27.731132 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:27.731132 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:27.731709 master-0 kubenswrapper[7385]: I0319 09:32:27.731151 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:28.291683 master-0 kubenswrapper[7385]: I0319 09:32:28.291636 7385 
generic.go:334] "Generic (PLEG): container finished" podID="53bff8e4-bf60-4386-8905-49d43fd6c420" containerID="63daec6a7a54ee857885e15f0afbbf6fb5689d16eaffe329ad8c85a73d06000a" exitCode=0 Mar 19 09:32:28.291971 master-0 kubenswrapper[7385]: I0319 09:32:28.291727 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" event={"ID":"53bff8e4-bf60-4386-8905-49d43fd6c420","Type":"ContainerDied","Data":"63daec6a7a54ee857885e15f0afbbf6fb5689d16eaffe329ad8c85a73d06000a"} Mar 19 09:32:28.292083 master-0 kubenswrapper[7385]: I0319 09:32:28.292071 7385 scope.go:117] "RemoveContainer" containerID="3c8b4e82c1555c09e55296bfca35644f6006a9bed8037eabe78692b05714698a" Mar 19 09:32:28.292704 master-0 kubenswrapper[7385]: I0319 09:32:28.292674 7385 scope.go:117] "RemoveContainer" containerID="63daec6a7a54ee857885e15f0afbbf6fb5689d16eaffe329ad8c85a73d06000a" Mar 19 09:32:28.296027 master-0 kubenswrapper[7385]: I0319 09:32:28.295800 7385 generic.go:334] "Generic (PLEG): container finished" podID="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" containerID="c28fd5198d7f8466f8d4a9327cbc9eb5d80742ce9844b91bf8ba1a1a20dc6eae" exitCode=0 Mar 19 09:32:28.296027 master-0 kubenswrapper[7385]: I0319 09:32:28.295887 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" event={"ID":"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4","Type":"ContainerDied","Data":"c28fd5198d7f8466f8d4a9327cbc9eb5d80742ce9844b91bf8ba1a1a20dc6eae"} Mar 19 09:32:28.296944 master-0 kubenswrapper[7385]: I0319 09:32:28.296457 7385 scope.go:117] "RemoveContainer" containerID="c28fd5198d7f8466f8d4a9327cbc9eb5d80742ce9844b91bf8ba1a1a20dc6eae" Mar 19 09:32:28.325209 master-0 kubenswrapper[7385]: I0319 09:32:28.325150 7385 scope.go:117] "RemoveContainer" containerID="13d37b6e0fd525b422b8c24e6c520e3e647d99050d3e3d8fce7cd4856511e27f" Mar 19 09:32:28.532507 master-0 kubenswrapper[7385]: I0319 
09:32:28.532460 7385 scope.go:117] "RemoveContainer" containerID="94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325" Mar 19 09:32:28.532738 master-0 kubenswrapper[7385]: E0319 09:32:28.532700 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-h4zrl_openshift-insights(70e8c62b-97c3-4c0c-85d3-f660118831fd)\"" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" podUID="70e8c62b-97c3-4c0c-85d3-f660118831fd" Mar 19 09:32:28.732899 master-0 kubenswrapper[7385]: I0319 09:32:28.732711 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:28.732899 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:28.732899 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:28.732899 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:28.734252 master-0 kubenswrapper[7385]: I0319 09:32:28.734166 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:29.305037 master-0 kubenswrapper[7385]: I0319 09:32:29.304992 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" event={"ID":"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4","Type":"ContainerStarted","Data":"96e4215c44905016f471e4d734ce9f51a4db57834ce1e7804c517a0a15960c74"} Mar 19 09:32:29.308320 master-0 kubenswrapper[7385]: I0319 09:32:29.308288 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" event={"ID":"53bff8e4-bf60-4386-8905-49d43fd6c420","Type":"ContainerStarted","Data":"553a1e09d51690a0f64c46774bd49cf40b65d55341b4452211b8d0490263316d"} Mar 19 09:32:29.731143 master-0 kubenswrapper[7385]: I0319 09:32:29.730978 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:32:29.731143 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:32:29.731143 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:32:29.731143 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:32:29.731791 master-0 kubenswrapper[7385]: I0319 09:32:29.731170 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:32:30.319050 master-0 kubenswrapper[7385]: I0319 09:32:30.318991 7385 generic.go:334] "Generic (PLEG): container finished" podID="a67ae8dc-240d-4708-9139-1d49c601e552" containerID="b5a43433ad01d4c8d725deb00c57fbbcb1186578ae1700355cef7f732ced844c" exitCode=0 Mar 19 09:32:30.319568 master-0 kubenswrapper[7385]: I0319 09:32:30.319076 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" event={"ID":"a67ae8dc-240d-4708-9139-1d49c601e552","Type":"ContainerDied","Data":"b5a43433ad01d4c8d725deb00c57fbbcb1186578ae1700355cef7f732ced844c"} Mar 19 09:32:30.319568 master-0 kubenswrapper[7385]: I0319 09:32:30.319208 7385 scope.go:117] "RemoveContainer" containerID="69c48f90f075a2cd2e8836a6c9cf1524c6d05160f72475eb6e7ea35e49cf68db" Mar 19 
09:32:30.320357 master-0 kubenswrapper[7385]: I0319 09:32:30.320293 7385 scope.go:117] "RemoveContainer" containerID="b5a43433ad01d4c8d725deb00c57fbbcb1186578ae1700355cef7f732ced844c"
Mar 19 09:32:30.321579 master-0 kubenswrapper[7385]: I0319 09:32:30.321526 7385 generic.go:334] "Generic (PLEG): container finished" podID="c222998f-6211-4466-8ad7-5d9fcfb10789" containerID="9f898450aabd10f55a00aca1216b3ea60aa3a67621f1566bfc4bf787f1440f93" exitCode=0
Mar 19 09:32:30.321637 master-0 kubenswrapper[7385]: I0319 09:32:30.321572 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" event={"ID":"c222998f-6211-4466-8ad7-5d9fcfb10789","Type":"ContainerDied","Data":"9f898450aabd10f55a00aca1216b3ea60aa3a67621f1566bfc4bf787f1440f93"}
Mar 19 09:32:30.322505 master-0 kubenswrapper[7385]: I0319 09:32:30.322470 7385 scope.go:117] "RemoveContainer" containerID="9f898450aabd10f55a00aca1216b3ea60aa3a67621f1566bfc4bf787f1440f93"
Mar 19 09:32:30.324720 master-0 kubenswrapper[7385]: I0319 09:32:30.324654 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-2k7c5_d66c30b6-67ad-4864-8b51-0424d462ac98/openshift-config-operator/1.log"
Mar 19 09:32:30.325459 master-0 kubenswrapper[7385]: I0319 09:32:30.325371 7385 generic.go:334] "Generic (PLEG): container finished" podID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerID="a42491788debafa4b5caebd582505d3e959b8406cff2a3c8d4b9e3e0ecd564e8" exitCode=0
Mar 19 09:32:30.325745 master-0 kubenswrapper[7385]: I0319 09:32:30.325707 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerDied","Data":"a42491788debafa4b5caebd582505d3e959b8406cff2a3c8d4b9e3e0ecd564e8"}
Mar 19 09:32:30.326657 master-0 kubenswrapper[7385]: I0319 09:32:30.326596 7385 scope.go:117] "RemoveContainer" containerID="a42491788debafa4b5caebd582505d3e959b8406cff2a3c8d4b9e3e0ecd564e8"
Mar 19 09:32:30.331392 master-0 kubenswrapper[7385]: I0319 09:32:30.331335 7385 generic.go:334] "Generic (PLEG): container finished" podID="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" containerID="121fbce462a7eafb62e39e83f1f28d2288860d27710d3e9a06350c53d4d1dd76" exitCode=0
Mar 19 09:32:30.331494 master-0 kubenswrapper[7385]: I0319 09:32:30.331445 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" event={"ID":"012cdc1d-ebc8-431e-9a52-9a39de95dd0d","Type":"ContainerDied","Data":"121fbce462a7eafb62e39e83f1f28d2288860d27710d3e9a06350c53d4d1dd76"}
Mar 19 09:32:30.332034 master-0 kubenswrapper[7385]: I0319 09:32:30.331994 7385 scope.go:117] "RemoveContainer" containerID="121fbce462a7eafb62e39e83f1f28d2288860d27710d3e9a06350c53d4d1dd76"
Mar 19 09:32:30.334257 master-0 kubenswrapper[7385]: I0319 09:32:30.334208 7385 generic.go:334] "Generic (PLEG): container finished" podID="43fca1a4-4fa7-4a43-b9c4-7f50a8737643" containerID="ea1f7d359b6ee07950af03d5716d56f99f195491d0e7434e7ef9e53aca7d8ce6" exitCode=0
Mar 19 09:32:30.334361 master-0 kubenswrapper[7385]: I0319 09:32:30.334280 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" event={"ID":"43fca1a4-4fa7-4a43-b9c4-7f50a8737643","Type":"ContainerDied","Data":"ea1f7d359b6ee07950af03d5716d56f99f195491d0e7434e7ef9e53aca7d8ce6"}
Mar 19 09:32:30.334659 master-0 kubenswrapper[7385]: I0319 09:32:30.334614 7385 scope.go:117] "RemoveContainer" containerID="ea1f7d359b6ee07950af03d5716d56f99f195491d0e7434e7ef9e53aca7d8ce6"
Mar 19 09:32:30.337676 master-0 kubenswrapper[7385]: I0319 09:32:30.337614 7385 generic.go:334] "Generic (PLEG): container finished" podID="fe1881fb-c670-442a-a092-c1eee6b7d5e5" containerID="f4bffeec1cd2a6c9d1bd3d0557a50165f71cd47937001ed7d994ee96e6f4f2fd" exitCode=0
Mar 19 09:32:30.337761 master-0 kubenswrapper[7385]: I0319 09:32:30.337710 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" event={"ID":"fe1881fb-c670-442a-a092-c1eee6b7d5e5","Type":"ContainerDied","Data":"f4bffeec1cd2a6c9d1bd3d0557a50165f71cd47937001ed7d994ee96e6f4f2fd"}
Mar 19 09:32:30.338671 master-0 kubenswrapper[7385]: I0319 09:32:30.338629 7385 scope.go:117] "RemoveContainer" containerID="f4bffeec1cd2a6c9d1bd3d0557a50165f71cd47937001ed7d994ee96e6f4f2fd"
Mar 19 09:32:30.340087 master-0 kubenswrapper[7385]: I0319 09:32:30.340046 7385 generic.go:334] "Generic (PLEG): container finished" podID="fed75514-8f48-40b7-9fed-0afd6042cfbf" containerID="fb94cc236c27d9ae2255663fca024f5b90148e514af1cb8c7ed1eaef28fc1582" exitCode=0
Mar 19 09:32:30.340087 master-0 kubenswrapper[7385]: I0319 09:32:30.340074 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" event={"ID":"fed75514-8f48-40b7-9fed-0afd6042cfbf","Type":"ContainerDied","Data":"fb94cc236c27d9ae2255663fca024f5b90148e514af1cb8c7ed1eaef28fc1582"}
Mar 19 09:32:30.340470 master-0 kubenswrapper[7385]: I0319 09:32:30.340430 7385 scope.go:117] "RemoveContainer" containerID="fb94cc236c27d9ae2255663fca024f5b90148e514af1cb8c7ed1eaef28fc1582"
Mar 19 09:32:30.342140 master-0 kubenswrapper[7385]: I0319 09:32:30.342096 7385 generic.go:334] "Generic (PLEG): container finished" podID="ded5da9a-1447-46df-a8ff-ffd469562599" containerID="5c0d59f8ce099c748a661f116e21ac9ceeb2f5758bc6d56b40e89d6cb4480b2d" exitCode=0
Mar 19 09:32:30.342206 master-0 kubenswrapper[7385]: I0319 09:32:30.342156 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" event={"ID":"ded5da9a-1447-46df-a8ff-ffd469562599","Type":"ContainerDied","Data":"5c0d59f8ce099c748a661f116e21ac9ceeb2f5758bc6d56b40e89d6cb4480b2d"}
Mar 19 09:32:30.348028 master-0 kubenswrapper[7385]: I0319 09:32:30.347657 7385 scope.go:117] "RemoveContainer" containerID="5c0d59f8ce099c748a661f116e21ac9ceeb2f5758bc6d56b40e89d6cb4480b2d"
Mar 19 09:32:30.349156 master-0 kubenswrapper[7385]: I0319 09:32:30.349110 7385 generic.go:334] "Generic (PLEG): container finished" podID="67e5534b-f428-45cf-b54e-d06b25dc3e09" containerID="c9951e834eac9fa8b70d5e1fa9bb37afc3d9012f0b6806bedca4371ec18ecd3e" exitCode=0
Mar 19 09:32:30.349260 master-0 kubenswrapper[7385]: I0319 09:32:30.349188 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" event={"ID":"67e5534b-f428-45cf-b54e-d06b25dc3e09","Type":"ContainerDied","Data":"c9951e834eac9fa8b70d5e1fa9bb37afc3d9012f0b6806bedca4371ec18ecd3e"}
Mar 19 09:32:30.349647 master-0 kubenswrapper[7385]: I0319 09:32:30.349568 7385 scope.go:117] "RemoveContainer" containerID="c9951e834eac9fa8b70d5e1fa9bb37afc3d9012f0b6806bedca4371ec18ecd3e"
Mar 19 09:32:30.351387 master-0 kubenswrapper[7385]: I0319 09:32:30.351345 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-8mpp9_a57648b5-1a08-49a7-bedb-f7c1e54d92b4/cluster-node-tuning-operator/0.log"
Mar 19 09:32:30.351495 master-0 kubenswrapper[7385]: I0319 09:32:30.351400 7385 generic.go:334] "Generic (PLEG): container finished" podID="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" containerID="8877b45464c5376d1635f878edec2b26c0ed093e8a5de4899f80eaf0d08390b4" exitCode=1
Mar 19 09:32:30.351495 master-0 kubenswrapper[7385]: I0319 09:32:30.351469 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" event={"ID":"a57648b5-1a08-49a7-bedb-f7c1e54d92b4","Type":"ContainerDied","Data":"8877b45464c5376d1635f878edec2b26c0ed093e8a5de4899f80eaf0d08390b4"}
Mar 19 09:32:30.352379 master-0 kubenswrapper[7385]: I0319 09:32:30.352279 7385 scope.go:117] "RemoveContainer" containerID="8877b45464c5376d1635f878edec2b26c0ed093e8a5de4899f80eaf0d08390b4"
Mar 19 09:32:30.353381 master-0 kubenswrapper[7385]: I0319 09:32:30.353347 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-p4hvm_7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/cluster-autoscaler-operator/0.log"
Mar 19 09:32:30.353976 master-0 kubenswrapper[7385]: I0319 09:32:30.353844 7385 generic.go:334] "Generic (PLEG): container finished" podID="7825a2ac-eab6-4988-861a-9e3bfdf5dcc8" containerID="d53ad972361319c74f326b3096df26b027816cd81f61b8b72dac0988e8a98e3b" exitCode=255
Mar 19 09:32:30.353976 master-0 kubenswrapper[7385]: I0319 09:32:30.353913 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" event={"ID":"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8","Type":"ContainerDied","Data":"d53ad972361319c74f326b3096df26b027816cd81f61b8b72dac0988e8a98e3b"}
Mar 19 09:32:30.354458 master-0 kubenswrapper[7385]: I0319 09:32:30.354379 7385 scope.go:117] "RemoveContainer" containerID="d53ad972361319c74f326b3096df26b027816cd81f61b8b72dac0988e8a98e3b"
Mar 19 09:32:30.357810 master-0 kubenswrapper[7385]: I0319 09:32:30.357751 7385 generic.go:334] "Generic (PLEG): container finished" podID="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" containerID="685e4b432ade20b1c50ec1b3266543948892457d2831f66c3796f3777b544a6e" exitCode=0
Mar 19 09:32:30.357895 master-0 kubenswrapper[7385]: I0319 09:32:30.357837 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" event={"ID":"ca2f7cb3-8812-4fe3-83a5-61668ef87f99","Type":"ContainerDied","Data":"685e4b432ade20b1c50ec1b3266543948892457d2831f66c3796f3777b544a6e"}
Mar 19 09:32:30.358229 master-0 kubenswrapper[7385]: I0319 09:32:30.358201 7385 scope.go:117] "RemoveContainer" containerID="685e4b432ade20b1c50ec1b3266543948892457d2831f66c3796f3777b544a6e"
Mar 19 09:32:30.360213 master-0 kubenswrapper[7385]: I0319 09:32:30.360178 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-52j2b_e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/package-server-manager/0.log"
Mar 19 09:32:30.360699 master-0 kubenswrapper[7385]: I0319 09:32:30.360582 7385 generic.go:334] "Generic (PLEG): container finished" podID="e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" containerID="8ba7304329f0a0ad38a3e444273ac007e5708e5106ec4cfd0157e01f42d39e4e" exitCode=1
Mar 19 09:32:30.360808 master-0 kubenswrapper[7385]: I0319 09:32:30.360666 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" event={"ID":"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc","Type":"ContainerDied","Data":"8ba7304329f0a0ad38a3e444273ac007e5708e5106ec4cfd0157e01f42d39e4e"}
Mar 19 09:32:30.362090 master-0 kubenswrapper[7385]: I0319 09:32:30.362057 7385 scope.go:117] "RemoveContainer" containerID="8ba7304329f0a0ad38a3e444273ac007e5708e5106ec4cfd0157e01f42d39e4e"
Mar 19 09:32:30.363132 master-0 kubenswrapper[7385]: I0319 09:32:30.362974 7385 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2eac-6412-4f38-8272-743c67b218a3" containerID="405f9880ce91d786192d330c1e84c542474ebb205faf0f516cd0ea59e7fb46ac" exitCode=0
Mar 19 09:32:30.363132 master-0 kubenswrapper[7385]: I0319 09:32:30.363071 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" event={"ID":"d6cd2eac-6412-4f38-8272-743c67b218a3","Type":"ContainerDied","Data":"405f9880ce91d786192d330c1e84c542474ebb205faf0f516cd0ea59e7fb46ac"}
Mar 19 09:32:30.363987 master-0 kubenswrapper[7385]: I0319 09:32:30.363952 7385 scope.go:117] "RemoveContainer" containerID="405f9880ce91d786192d330c1e84c542474ebb205faf0f516cd0ea59e7fb46ac"
Mar 19 09:32:30.368366 master-0 kubenswrapper[7385]: I0319 09:32:30.368284 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9jbdl_cd1425b9-fcd1-4aba-899f-e110eebce626/machine-api-operator/0.log"
Mar 19 09:32:30.377399 master-0 kubenswrapper[7385]: I0319 09:32:30.377352 7385 generic.go:334] "Generic (PLEG): container finished" podID="cd1425b9-fcd1-4aba-899f-e110eebce626" containerID="e58b99f4da3ded2a286482407189e580812fbd5fde61313a0d8876d046001408" exitCode=255
Mar 19 09:32:30.377514 master-0 kubenswrapper[7385]: I0319 09:32:30.377403 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" event={"ID":"cd1425b9-fcd1-4aba-899f-e110eebce626","Type":"ContainerDied","Data":"e58b99f4da3ded2a286482407189e580812fbd5fde61313a0d8876d046001408"}
Mar 19 09:32:30.377992 master-0 kubenswrapper[7385]: I0319 09:32:30.377946 7385 scope.go:117] "RemoveContainer" containerID="e58b99f4da3ded2a286482407189e580812fbd5fde61313a0d8876d046001408"
Mar 19 09:32:30.545575 master-0 kubenswrapper[7385]: I0319 09:32:30.541751 7385 scope.go:117] "RemoveContainer" containerID="33a0eabafa6de07993391ffc1dff5fcd967838ca425e080e0901a5f9624f873e"
Mar 19 09:32:30.545575 master-0 kubenswrapper[7385]: I0319 09:32:30.542104 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:32:30.671245 master-0 kubenswrapper[7385]: I0319 09:32:30.671210 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:32:30.724779 master-0 kubenswrapper[7385]: I0319 09:32:30.724746 7385 scope.go:117] "RemoveContainer" containerID="7f84fbd703825db689c03d2baee5e05e0406b0c7857947e23dfe9649aed6fbc3"
Mar 19 09:32:30.730917 master-0 kubenswrapper[7385]: I0319 09:32:30.729989 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:30.730917 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:30.730917 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:30.730917 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:30.730917 master-0 kubenswrapper[7385]: I0319 09:32:30.730201 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:30.779109 master-0 kubenswrapper[7385]: I0319 09:32:30.778607 7385 scope.go:117] "RemoveContainer" containerID="68fbf6321802565874265d19454cbc64b4b4b521a0e102ded43536ee428b4258"
Mar 19 09:32:30.830484 master-0 kubenswrapper[7385]: I0319 09:32:30.830090 7385 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:32:30.894318 master-0 kubenswrapper[7385]: I0319 09:32:30.885236 7385 scope.go:117] "RemoveContainer" containerID="3335c7fc18f5f7e2694a86064d55e2221326f9866ff420531a852d42c29d0c0d"
Mar 19 09:32:31.386028 master-0 kubenswrapper[7385]: I0319 09:32:31.385978 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" event={"ID":"ca2f7cb3-8812-4fe3-83a5-61668ef87f99","Type":"ContainerStarted","Data":"b81ad8766bc07b62b8dfbc1b7c0597d9a58eef25a8b3e27a3d66396a449be607"}
Mar 19 09:32:31.388709 master-0 kubenswrapper[7385]: I0319 09:32:31.388684 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9jbdl_cd1425b9-fcd1-4aba-899f-e110eebce626/machine-api-operator/0.log"
Mar 19 09:32:31.389111 master-0 kubenswrapper[7385]: I0319 09:32:31.389070 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" event={"ID":"cd1425b9-fcd1-4aba-899f-e110eebce626","Type":"ContainerStarted","Data":"a89a9066fc7b3430f0160fa94a06fe20429f9f3c2b297536a5d0a8a1408c1a10"}
Mar 19 09:32:31.391233 master-0 kubenswrapper[7385]: I0319 09:32:31.391200 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" event={"ID":"ded5da9a-1447-46df-a8ff-ffd469562599","Type":"ContainerStarted","Data":"1b972e483addc4a6cc89c09ca4dab02c08ae8dc9439ddfbefd77bc0259ca2bfd"}
Mar 19 09:32:31.392977 master-0 kubenswrapper[7385]: I0319 09:32:31.392945 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" event={"ID":"43fca1a4-4fa7-4a43-b9c4-7f50a8737643","Type":"ContainerStarted","Data":"6deb2d35d1503e78a8aa45e02da5c364b1c44ca3e33a9f3f2ab37361a4736dcb"}
Mar 19 09:32:31.395205 master-0 kubenswrapper[7385]: I0319 09:32:31.395168 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-8mpp9_a57648b5-1a08-49a7-bedb-f7c1e54d92b4/cluster-node-tuning-operator/0.log"
Mar 19 09:32:31.395277 master-0 kubenswrapper[7385]: I0319 09:32:31.395249 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" event={"ID":"a57648b5-1a08-49a7-bedb-f7c1e54d92b4","Type":"ContainerStarted","Data":"2111f5d0e4a37e185b61a9f53bbd5b94ae96f8fd68f1387de379d3f04d45fec0"}
Mar 19 09:32:31.397838 master-0 kubenswrapper[7385]: I0319 09:32:31.397807 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" event={"ID":"c222998f-6211-4466-8ad7-5d9fcfb10789","Type":"ContainerStarted","Data":"e122d86c994e75502c8ce95eb8fe9b27fb1fff73c39fd9bc72c216acfb3cd7ec"}
Mar 19 09:32:31.400216 master-0 kubenswrapper[7385]: I0319 09:32:31.400188 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" event={"ID":"d66c30b6-67ad-4864-8b51-0424d462ac98","Type":"ContainerStarted","Data":"005a4056c4294d03dc295bf5765313c2566fd4bd227720dadf8d10f0eaa585f5"}
Mar 19 09:32:31.400601 master-0 kubenswrapper[7385]: I0319 09:32:31.400578 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:32:31.404002 master-0 kubenswrapper[7385]: I0319 09:32:31.403969 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-p4hvm_7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/cluster-autoscaler-operator/0.log"
Mar 19 09:32:31.404634 master-0 kubenswrapper[7385]: I0319 09:32:31.404581 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" event={"ID":"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8","Type":"ContainerStarted","Data":"fba07e52b30494b6092df186816c27354e75e3e9fa8be1724e3cc2cf467e11fa"}
Mar 19 09:32:31.406920 master-0 kubenswrapper[7385]: I0319 09:32:31.406888 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" event={"ID":"012cdc1d-ebc8-431e-9a52-9a39de95dd0d","Type":"ContainerStarted","Data":"de3427cd80dc0438276bfb3e76e48d6529fd1ffc42d003a65d0216b949cdbfc1"}
Mar 19 09:32:31.408363 master-0 kubenswrapper[7385]: I0319 09:32:31.408320 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" event={"ID":"a67ae8dc-240d-4708-9139-1d49c601e552","Type":"ContainerStarted","Data":"e303ec689c5fbeb55089487fee85dbe16ebbbd12d18080c9b609d0a45c16f1aa"}
Mar 19 09:32:31.410325 master-0 kubenswrapper[7385]: I0319 09:32:31.410278 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" event={"ID":"d6cd2eac-6412-4f38-8272-743c67b218a3","Type":"ContainerStarted","Data":"173502dd45845c09aae19301971e6c8a3a3c399936bfb0cf46b724cf4e84e620"}
Mar 19 09:32:31.412677 master-0 kubenswrapper[7385]: I0319 09:32:31.412650 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" event={"ID":"67e5534b-f428-45cf-b54e-d06b25dc3e09","Type":"ContainerStarted","Data":"7ec687de0cdd32e0257cd92675beea3383db9347f0020cfcb4b087ad99dac744"}
Mar 19 09:32:31.414607 master-0 kubenswrapper[7385]: I0319 09:32:31.414512 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" event={"ID":"fe1881fb-c670-442a-a092-c1eee6b7d5e5","Type":"ContainerStarted","Data":"f1fed48c764387ed768fb27d81cf7a7e0c7d1b6cd8696579cbac6d562e01570b"}
Mar 19 09:32:31.417642 master-0 kubenswrapper[7385]: I0319 09:32:31.417597 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-52j2b_e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/package-server-manager/0.log"
Mar 19 09:32:31.418255 master-0 kubenswrapper[7385]: I0319 09:32:31.418195 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" event={"ID":"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc","Type":"ContainerStarted","Data":"555015ee0b002715b9678ef2dc04dcc0f8d2c0f7f2b93483b801cfe4e8f1e8ae"}
Mar 19 09:32:31.418695 master-0 kubenswrapper[7385]: I0319 09:32:31.418673 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:32:31.423561 master-0 kubenswrapper[7385]: I0319 09:32:31.423102 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" event={"ID":"fed75514-8f48-40b7-9fed-0afd6042cfbf","Type":"ContainerStarted","Data":"865da64b54fabb0668bf3de1b2b8d157266ad601077e941b3d1683c8a792e0c9"}
Mar 19 09:32:31.730130 master-0 kubenswrapper[7385]: I0319 09:32:31.730015 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:31.730130 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:31.730130 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:31.730130 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:31.730130 master-0 kubenswrapper[7385]: I0319 09:32:31.730066 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:32.731438 master-0 kubenswrapper[7385]: I0319 09:32:32.731321 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:32.731438 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:32.731438 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:32.731438 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:32.731438 master-0 kubenswrapper[7385]: I0319 09:32:32.731412 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:33.447718 master-0 kubenswrapper[7385]: I0319 09:32:33.447661 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:32:33.532388 master-0 kubenswrapper[7385]: I0319 09:32:33.532358 7385 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7"
Mar 19 09:32:33.532865 master-0 kubenswrapper[7385]: E0319 09:32:33.532845 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a"
Mar 19 09:32:33.742730 master-0 kubenswrapper[7385]: I0319 09:32:33.738349 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:33.742730 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:33.742730 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:33.742730 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:33.742730 master-0 kubenswrapper[7385]: I0319 09:32:33.738420 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:34.731111 master-0 kubenswrapper[7385]: I0319 09:32:34.731061 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:34.731111 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:34.731111 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:34.731111 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:34.731395 master-0 kubenswrapper[7385]: I0319 09:32:34.731141 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:34.938136 master-0 kubenswrapper[7385]: I0319 09:32:34.938099 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 19 09:32:34.947723 master-0 kubenswrapper[7385]: E0319 09:32:34.947644 7385 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-5-master-0: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 19 09:32:34.947723 master-0 kubenswrapper[7385]: E0319 09:32:34.947712 7385 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access podName:e0ce846a-f7ca-4f96-9bb4-509d084dcec1 nodeName:}" failed. No retries permitted until 2026-03-19 09:32:35.447692525 +0000 UTC m=+851.122122226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access") pod "installer-5-master-0" (UID: "e0ce846a-f7ca-4f96-9bb4-509d084dcec1") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded
Mar 19 09:32:35.459853 master-0 kubenswrapper[7385]: I0319 09:32:35.459808 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 19 09:32:35.529999 master-0 kubenswrapper[7385]: I0319 09:32:35.529941 7385 scope.go:117] "RemoveContainer" containerID="d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c"
Mar 19 09:32:35.730377 master-0 kubenswrapper[7385]: I0319 09:32:35.730351 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:35.730377 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:35.730377 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:35.730377 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:35.730658 master-0 kubenswrapper[7385]: I0319 09:32:35.730406 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:36.460704 master-0 kubenswrapper[7385]: I0319 09:32:36.460622 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/3.log"
Mar 19 09:32:36.462302 master-0 kubenswrapper[7385]: I0319 09:32:36.462253 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log"
Mar 19 09:32:36.462404 master-0 kubenswrapper[7385]: I0319 09:32:36.462332 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"67658b93f6f5927402b87ec35623e46e","Type":"ContainerStarted","Data":"6ef69a9aa568c569e28a8cf9a8398ecd1d39a543a999398bc8742b280aa881bd"}
Mar 19 09:32:36.731045 master-0 kubenswrapper[7385]: I0319 09:32:36.730944 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:36.731045 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:36.731045 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:36.731045 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:36.731045 master-0 kubenswrapper[7385]: I0319 09:32:36.731006 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:37.731191 master-0 kubenswrapper[7385]: I0319 09:32:37.731124 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 19 09:32:37.731191 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld
Mar 19 09:32:37.731191 master-0 kubenswrapper[7385]: [+]process-running ok
Mar 19 09:32:37.731191 master-0 kubenswrapper[7385]: healthz check failed
Mar 19 09:32:37.731191 master-0 kubenswrapper[7385]: I0319 09:32:37.731205 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 19 09:32:37.732058 master-0 kubenswrapper[7385]: I0319 09:32:37.731262 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:32:37.732058 master-0 kubenswrapper[7385]: I0319 09:32:37.731901 7385 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"fd8bb80d426a5da3f781ac199d36ba296827076a405918db4a564ba51e18307a"} pod="openshift-ingress/router-default-7dcf5569b5-k99cg" containerMessage="Container router failed startup probe, will be restarted"
Mar 19 09:32:37.732058 master-0 kubenswrapper[7385]: I0319 09:32:37.731943 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" containerID="cri-o://fd8bb80d426a5da3f781ac199d36ba296827076a405918db4a564ba51e18307a" gracePeriod=3600
Mar 19 09:32:42.530782 master-0 kubenswrapper[7385]: I0319 09:32:42.530721 7385 scope.go:117] "RemoveContainer" containerID="94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325"
Mar 19 09:32:43.519754 master-0 kubenswrapper[7385]: I0319 09:32:43.519685 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" event={"ID":"70e8c62b-97c3-4c0c-85d3-f660118831fd","Type":"ContainerStarted","Data":"774ade1f590e9f92f8a2420e479445f3da63f1b61d08d7641d6c96bfc06925ad"}
Mar 19 09:32:43.991153 master-0 kubenswrapper[7385]: I0319 09:32:43.991081 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:32:43.991767 master-0 kubenswrapper[7385]: I0319 09:32:43.991360 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:32:43.996162 master-0 kubenswrapper[7385]: I0319 09:32:43.996108 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:32:45.530738 master-0 kubenswrapper[7385]: I0319 09:32:45.530687 7385 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7"
Mar 19 09:32:45.531741 master-0 kubenswrapper[7385]: E0319 09:32:45.530981 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a"
Mar 19 09:32:45.539490 master-0 kubenswrapper[7385]: I0319 09:32:45.539431 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 19 09:32:47.120760 master-0 kubenswrapper[7385]: I0319 09:32:47.120701 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access\") pod \"installer-5-master-0\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 19 09:32:47.125513 master-0 kubenswrapper[7385]: I0319 09:32:47.125457 7385 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 19 09:32:47.131521 master-0 kubenswrapper[7385]: I0319 09:32:47.131480 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kube-api-access\") pod \"installer-5-master-0\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 19 09:32:47.138192 master-0 kubenswrapper[7385]: I0319 09:32:47.138151 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 09:32:47.288605 master-0 kubenswrapper[7385]: I0319 09:32:47.288536 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-444qc"
Mar 19 09:32:47.296826 master-0 kubenswrapper[7385]: I0319 09:32:47.296781 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 19 09:32:47.333449 master-0 kubenswrapper[7385]: I0319 09:32:47.327689 7385 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-j48rl"
Mar 19 09:32:47.333449 master-0 kubenswrapper[7385]: I0319 09:32:47.332582 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 09:32:47.363565 master-0 kubenswrapper[7385]: I0319 09:32:47.363160 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 19 09:32:47.725746 master-0 kubenswrapper[7385]: I0319 09:32:47.725701 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 19 09:32:47.817149 master-0 kubenswrapper[7385]: I0319 09:32:47.809848 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 19 09:32:47.817149 master-0 kubenswrapper[7385]: I0319 09:32:47.814209 7385 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"]
Mar 19 09:32:47.819845 master-0 kubenswrapper[7385]: W0319 09:32:47.819784 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod98826625_8de0_4bf7_8926_ec62517369e5.slice/crio-a2c2da2395722223505ad6defea9773e10f1c32ee7ec1b621432372d72816ee7 WatchSource:0}: Error finding container a2c2da2395722223505ad6defea9773e10f1c32ee7ec1b621432372d72816ee7: Status 404 returned error can't find the container with id a2c2da2395722223505ad6defea9773e10f1c32ee7ec1b621432372d72816ee7
Mar 19 09:32:47.821812 master-0 kubenswrapper[7385]: W0319 09:32:47.821780 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc20d34ff_5b2a_4142_802f_57a7a38c5a12.slice/crio-b4944acb6dda035dde270308345019acdc87bd2a81d8b65e1c0a2845a63c510d WatchSource:0}: Error finding container b4944acb6dda035dde270308345019acdc87bd2a81d8b65e1c0a2845a63c510d: Status 404 returned error can't find the container with id b4944acb6dda035dde270308345019acdc87bd2a81d8b65e1c0a2845a63c510d
Mar 19 09:32:48.559685 master-0 kubenswrapper[7385]: I0319 09:32:48.559600 7385 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"98826625-8de0-4bf7-8926-ec62517369e5","Type":"ContainerStarted","Data":"47f63f0f88f52262ec4bb448c720e1d131874e1c77a757276ce8eb2d6c24cab5"} Mar 19 09:32:48.559685 master-0 kubenswrapper[7385]: I0319 09:32:48.559684 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"98826625-8de0-4bf7-8926-ec62517369e5","Type":"ContainerStarted","Data":"a2c2da2395722223505ad6defea9773e10f1c32ee7ec1b621432372d72816ee7"} Mar 19 09:32:48.562399 master-0 kubenswrapper[7385]: I0319 09:32:48.562237 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"c20d34ff-5b2a-4142-802f-57a7a38c5a12","Type":"ContainerStarted","Data":"542dee821bce6b00fb4a89381e02478e42521bf4fb5559fd959616b012db8e61"} Mar 19 09:32:48.562399 master-0 kubenswrapper[7385]: I0319 09:32:48.562273 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"c20d34ff-5b2a-4142-802f-57a7a38c5a12","Type":"ContainerStarted","Data":"b4944acb6dda035dde270308345019acdc87bd2a81d8b65e1c0a2845a63c510d"} Mar 19 09:32:48.565282 master-0 kubenswrapper[7385]: I0319 09:32:48.565236 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e0ce846a-f7ca-4f96-9bb4-509d084dcec1","Type":"ContainerStarted","Data":"9b72a735e8178867a7e32af1f6ff03d583d0af440844ffb7c12f63cbd3f26349"} Mar 19 09:32:48.565282 master-0 kubenswrapper[7385]: I0319 09:32:48.565281 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e0ce846a-f7ca-4f96-9bb4-509d084dcec1","Type":"ContainerStarted","Data":"fa6f8cb5d8c6bf0298daad9cbc84db09fdcf39078ac76e6417bed28402a86c24"} Mar 19 09:32:48.582000 master-0 kubenswrapper[7385]: I0319 09:32:48.575672 7385 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=27.575652179 podStartE2EDuration="27.575652179s" podCreationTimestamp="2026-03-19 09:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:32:48.573991814 +0000 UTC m=+864.248421515" watchObservedRunningTime="2026-03-19 09:32:48.575652179 +0000 UTC m=+864.250081870" Mar 19 09:32:48.592713 master-0 kubenswrapper[7385]: I0319 09:32:48.592643 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=48.592627028 podStartE2EDuration="48.592627028s" podCreationTimestamp="2026-03-19 09:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:32:48.590642154 +0000 UTC m=+864.265071865" watchObservedRunningTime="2026-03-19 09:32:48.592627028 +0000 UTC m=+864.267056729" Mar 19 09:32:48.611123 master-0 kubenswrapper[7385]: I0319 09:32:48.611060 7385 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=22.611041378 podStartE2EDuration="22.611041378s" podCreationTimestamp="2026-03-19 09:32:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:32:48.606887594 +0000 UTC m=+864.281317295" watchObservedRunningTime="2026-03-19 09:32:48.611041378 +0000 UTC m=+864.285471079" Mar 19 09:32:56.531405 master-0 kubenswrapper[7385]: I0319 09:32:56.531349 7385 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7" Mar 19 09:32:56.532100 master-0 kubenswrapper[7385]: E0319 09:32:56.531618 7385 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:33:02.072654 master-0 kubenswrapper[7385]: I0319 09:33:02.072510 7385 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:33:07.530263 master-0 kubenswrapper[7385]: I0319 09:33:07.530180 7385 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7" Mar 19 09:33:07.531026 master-0 kubenswrapper[7385]: E0319 09:33:07.530437 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:33:19.067678 master-0 kubenswrapper[7385]: E0319 09:33:19.067618 7385 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-scheduler-pod.yaml\": /etc/kubernetes/manifests/kube-scheduler-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 19 09:33:19.068623 master-0 kubenswrapper[7385]: I0319 09:33:19.068589 7385 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 09:33:19.069063 master-0 kubenswrapper[7385]: I0319 09:33:19.069029 7385 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 09:33:19.069136 master-0 kubenswrapper[7385]: I0319 09:33:19.069075 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" containerID="cri-o://53fac99b9b6d7113ded13db31c06fb6988d91b7900890060d24517f7c6a3af61" gracePeriod=30 Mar 19 09:33:19.069206 master-0 kubenswrapper[7385]: I0319 09:33:19.069011 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" containerID="cri-o://48d42851ba5e1a1222e1f2eb24f68210235c910ac77423fe9def29b71929e2f4" gracePeriod=30 Mar 19 09:33:19.069318 master-0 kubenswrapper[7385]: I0319 09:33:19.069202 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" containerID="cri-o://95ac7f362ef5d31be76e509ce342250794db8fc83ad49a811e1f5659d7238a79" gracePeriod=30 Mar 19 09:33:19.069414 master-0 kubenswrapper[7385]: E0319 09:33:19.069390 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:33:19.069414 master-0 kubenswrapper[7385]: I0319 09:33:19.069410 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:33:19.069505 master-0 kubenswrapper[7385]: E0319 09:33:19.069459 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 09:33:19.069505 master-0 kubenswrapper[7385]: I0319 09:33:19.069470 7385 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 09:33:19.069505 master-0 kubenswrapper[7385]: E0319 09:33:19.069488 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 09:33:19.069505 master-0 kubenswrapper[7385]: I0319 09:33:19.069496 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 09:33:19.069505 master-0 kubenswrapper[7385]: E0319 09:33:19.069508 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 19 09:33:19.070071 master-0 kubenswrapper[7385]: I0319 09:33:19.069516 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 19 09:33:19.070071 master-0 kubenswrapper[7385]: I0319 09:33:19.069709 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:33:19.070071 master-0 kubenswrapper[7385]: I0319 09:33:19.069723 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:33:19.070071 master-0 kubenswrapper[7385]: I0319 09:33:19.069750 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 09:33:19.070071 master-0 kubenswrapper[7385]: I0319 09:33:19.069762 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 09:33:19.070071 master-0 kubenswrapper[7385]: E0319 09:33:19.069887 7385 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:33:19.070071 master-0 kubenswrapper[7385]: I0319 09:33:19.069895 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:33:19.135741 master-0 kubenswrapper[7385]: I0319 09:33:19.135687 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.135849 master-0 kubenswrapper[7385]: I0319 09:33:19.135799 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.234023 master-0 kubenswrapper[7385]: I0319 09:33:19.233963 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 19 09:33:19.234716 master-0 kubenswrapper[7385]: I0319 09:33:19.234674 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 09:33:19.235096 master-0 kubenswrapper[7385]: I0319 09:33:19.235065 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.237741 master-0 kubenswrapper[7385]: I0319 09:33:19.237709 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.237831 master-0 kubenswrapper[7385]: I0319 09:33:19.237796 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.237904 master-0 kubenswrapper[7385]: I0319 09:33:19.237863 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.237952 master-0 kubenswrapper[7385]: I0319 09:33:19.237936 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.238436 master-0 kubenswrapper[7385]: I0319 09:33:19.238387 7385 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" 
podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 19 09:33:19.339477 master-0 kubenswrapper[7385]: I0319 09:33:19.339336 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " Mar 19 09:33:19.339477 master-0 kubenswrapper[7385]: I0319 09:33:19.339429 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " Mar 19 09:33:19.339758 master-0 kubenswrapper[7385]: I0319 09:33:19.339462 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:19.339758 master-0 kubenswrapper[7385]: I0319 09:33:19.339602 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:19.340096 master-0 kubenswrapper[7385]: I0319 09:33:19.340050 7385 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:19.340142 master-0 kubenswrapper[7385]: I0319 09:33:19.340097 7385 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:19.789113 master-0 kubenswrapper[7385]: I0319 09:33:19.789055 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 19 09:33:19.789711 master-0 kubenswrapper[7385]: I0319 09:33:19.789684 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log" Mar 19 09:33:19.790773 master-0 kubenswrapper[7385]: I0319 09:33:19.790732 7385 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="53fac99b9b6d7113ded13db31c06fb6988d91b7900890060d24517f7c6a3af61" exitCode=0 Mar 19 09:33:19.790773 master-0 kubenswrapper[7385]: I0319 09:33:19.790760 7385 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="95ac7f362ef5d31be76e509ce342250794db8fc83ad49a811e1f5659d7238a79" exitCode=0 Mar 19 09:33:19.790773 master-0 kubenswrapper[7385]: I0319 09:33:19.790768 7385 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="48d42851ba5e1a1222e1f2eb24f68210235c910ac77423fe9def29b71929e2f4" exitCode=2 Mar 19 09:33:19.790895 master-0 kubenswrapper[7385]: I0319 09:33:19.790813 7385 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a67637c3ce9588f542e20565aba89d6f1d4976553a42d7b1a45d6451a0663219" Mar 19 09:33:19.790895 master-0 kubenswrapper[7385]: I0319 09:33:19.790827 7385 scope.go:117] "RemoveContainer" containerID="7414ff0d187efeb091c598330787485add0219e366d0b09f7b817dd18949f28f" Mar 19 09:33:19.790956 master-0 kubenswrapper[7385]: I0319 09:33:19.790912 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:19.797230 master-0 kubenswrapper[7385]: I0319 09:33:19.797056 7385 generic.go:334] "Generic (PLEG): container finished" podID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerID="9b72a735e8178867a7e32af1f6ff03d583d0af440844ffb7c12f63cbd3f26349" exitCode=0 Mar 19 09:33:19.797230 master-0 kubenswrapper[7385]: I0319 09:33:19.797098 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e0ce846a-f7ca-4f96-9bb4-509d084dcec1","Type":"ContainerDied","Data":"9b72a735e8178867a7e32af1f6ff03d583d0af440844ffb7c12f63cbd3f26349"} Mar 19 09:33:19.798558 master-0 kubenswrapper[7385]: I0319 09:33:19.798509 7385 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 19 09:33:19.819737 master-0 kubenswrapper[7385]: I0319 09:33:19.819681 7385 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 19 09:33:20.543869 master-0 kubenswrapper[7385]: I0319 09:33:20.543763 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8413125cf444e5c95f023c5dd9c6151e" 
path="/var/lib/kubelet/pods/8413125cf444e5c95f023c5dd9c6151e/volumes" Mar 19 09:33:20.675811 master-0 kubenswrapper[7385]: I0319 09:33:20.675748 7385 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:33:20.676058 master-0 kubenswrapper[7385]: I0319 09:33:20.676021 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://9e6e66f3c8a2bf098bd2b9e3696054dca0c7f2ceb70c56ac6de3f703458cdd0e" gracePeriod=30 Mar 19 09:33:20.676188 master-0 kubenswrapper[7385]: I0319 09:33:20.676156 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" containerID="cri-o://6ef69a9aa568c569e28a8cf9a8398ecd1d39a543a999398bc8742b280aa881bd" gracePeriod=30 Mar 19 09:33:20.676238 master-0 kubenswrapper[7385]: I0319 09:33:20.676202 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" containerID="cri-o://33fbab3dae4d95c59279d28953be3dee55bacb9a970231a9a8855ae0fd8f5ddd" gracePeriod=30 Mar 19 09:33:20.676238 master-0 kubenswrapper[7385]: I0319 09:33:20.676235 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://759c350f58d15a9fddc9b4b3e95d92a6db8fdfb3e82a8bd183c2d36ff84c76ed" gracePeriod=30 Mar 19 09:33:20.678097 master-0 kubenswrapper[7385]: I0319 
09:33:20.678052 7385 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:33:20.678362 master-0 kubenswrapper[7385]: E0319 09:33:20.678340 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678362 master-0 kubenswrapper[7385]: I0319 09:33:20.678359 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: E0319 09:33:20.678370 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-cert-syncer" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: I0319 09:33:20.678377 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-cert-syncer" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: E0319 09:33:20.678390 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: I0319 09:33:20.678397 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: E0319 09:33:20.678407 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: I0319 09:33:20.678414 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: E0319 09:33:20.678431 7385 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: I0319 09:33:20.678437 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: E0319 09:33:20.678446 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-recovery-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: I0319 09:33:20.678453 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-recovery-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: E0319 09:33:20.678468 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678474 master-0 kubenswrapper[7385]: I0319 09:33:20.678475 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678974 master-0 kubenswrapper[7385]: I0319 09:33:20.678606 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678974 master-0 kubenswrapper[7385]: I0319 09:33:20.678617 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-cert-syncer" Mar 19 09:33:20.678974 master-0 kubenswrapper[7385]: I0319 09:33:20.678630 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager-recovery-controller" Mar 19 09:33:20.678974 master-0 kubenswrapper[7385]: I0319 
09:33:20.678640 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.678974 master-0 kubenswrapper[7385]: I0319 09:33:20.678648 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:33:20.678974 master-0 kubenswrapper[7385]: I0319 09:33:20.678663 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:33:20.678974 master-0 kubenswrapper[7385]: I0319 09:33:20.678677 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.679249 master-0 kubenswrapper[7385]: E0319 09:33:20.679083 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.679249 master-0 kubenswrapper[7385]: I0319 09:33:20.679096 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.679249 master-0 kubenswrapper[7385]: E0319 09:33:20.679106 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:33:20.679249 master-0 kubenswrapper[7385]: I0319 09:33:20.679113 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:33:20.679249 master-0 kubenswrapper[7385]: I0319 09:33:20.679250 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.679448 master-0 kubenswrapper[7385]: I0319 09:33:20.679267 7385 
memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:33:20.759689 master-0 kubenswrapper[7385]: I0319 09:33:20.759586 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:20.759825 master-0 kubenswrapper[7385]: I0319 09:33:20.759748 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:20.805964 master-0 kubenswrapper[7385]: I0319 09:33:20.805926 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log" Mar 19 09:33:20.809921 master-0 kubenswrapper[7385]: I0319 09:33:20.809841 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/cluster-policy-controller/3.log" Mar 19 09:33:20.812076 master-0 kubenswrapper[7385]: I0319 09:33:20.812033 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager-cert-syncer/0.log" Mar 19 09:33:20.812605 master-0 kubenswrapper[7385]: I0319 09:33:20.812571 7385 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager/0.log" Mar 19 09:33:20.812658 master-0 kubenswrapper[7385]: I0319 09:33:20.812616 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="6ef69a9aa568c569e28a8cf9a8398ecd1d39a543a999398bc8742b280aa881bd" exitCode=0 Mar 19 09:33:20.812658 master-0 kubenswrapper[7385]: I0319 09:33:20.812640 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="33fbab3dae4d95c59279d28953be3dee55bacb9a970231a9a8855ae0fd8f5ddd" exitCode=0 Mar 19 09:33:20.812658 master-0 kubenswrapper[7385]: I0319 09:33:20.812650 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="759c350f58d15a9fddc9b4b3e95d92a6db8fdfb3e82a8bd183c2d36ff84c76ed" exitCode=0 Mar 19 09:33:20.812658 master-0 kubenswrapper[7385]: I0319 09:33:20.812658 7385 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="9e6e66f3c8a2bf098bd2b9e3696054dca0c7f2ceb70c56ac6de3f703458cdd0e" exitCode=2 Mar 19 09:33:20.812915 master-0 kubenswrapper[7385]: I0319 09:33:20.812885 7385 scope.go:117] "RemoveContainer" containerID="d24e8c91ba8a19de20e7fb8f40e6af7850a04f5e908516d33b577317e80e112c" Mar 19 09:33:20.835869 master-0 kubenswrapper[7385]: I0319 09:33:20.831950 7385 scope.go:117] "RemoveContainer" containerID="57d853d1ec8afcb012b4bb8c0bf03fdeac8c6cbef5eb24aa2fea3d5801611fb9" Mar 19 09:33:20.852722 master-0 kubenswrapper[7385]: I0319 09:33:20.852686 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager-cert-syncer/0.log" Mar 19 09:33:20.853167 master-0 kubenswrapper[7385]: I0319 09:33:20.853129 7385 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:20.856504 master-0 kubenswrapper[7385]: I0319 09:33:20.856455 7385 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="67658b93f6f5927402b87ec35623e46e" podUID="d3939b09ae7c21557b3dd5ab01349318" Mar 19 09:33:20.861202 master-0 kubenswrapper[7385]: I0319 09:33:20.861169 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:20.861299 master-0 kubenswrapper[7385]: I0319 09:33:20.861278 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:20.861430 master-0 kubenswrapper[7385]: I0319 09:33:20.861311 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:20.861430 master-0 kubenswrapper[7385]: I0319 09:33:20.861290 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:20.963359 master-0 kubenswrapper[7385]: I0319 09:33:20.962365 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-resource-dir\") pod \"67658b93f6f5927402b87ec35623e46e\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " Mar 19 09:33:20.963359 master-0 kubenswrapper[7385]: I0319 09:33:20.962461 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-cert-dir\") pod \"67658b93f6f5927402b87ec35623e46e\" (UID: \"67658b93f6f5927402b87ec35623e46e\") " Mar 19 09:33:20.963359 master-0 kubenswrapper[7385]: I0319 09:33:20.962492 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "67658b93f6f5927402b87ec35623e46e" (UID: "67658b93f6f5927402b87ec35623e46e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:20.963359 master-0 kubenswrapper[7385]: I0319 09:33:20.962613 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "67658b93f6f5927402b87ec35623e46e" (UID: "67658b93f6f5927402b87ec35623e46e"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:20.963359 master-0 kubenswrapper[7385]: I0319 09:33:20.962819 7385 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:20.963359 master-0 kubenswrapper[7385]: I0319 09:33:20.962831 7385 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/67658b93f6f5927402b87ec35623e46e-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:21.026824 master-0 kubenswrapper[7385]: I0319 09:33:21.026786 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:33:21.165306 master-0 kubenswrapper[7385]: I0319 09:33:21.165234 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-var-lock\") pod \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " Mar 19 09:33:21.165306 master-0 kubenswrapper[7385]: I0319 09:33:21.165287 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access\") pod \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " Mar 19 09:33:21.165732 master-0 kubenswrapper[7385]: I0319 09:33:21.165329 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kubelet-dir\") pod \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\" (UID: \"e0ce846a-f7ca-4f96-9bb4-509d084dcec1\") " Mar 19 09:33:21.165732 master-0 kubenswrapper[7385]: I0319 09:33:21.165364 7385 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-var-lock" (OuterVolumeSpecName: "var-lock") pod "e0ce846a-f7ca-4f96-9bb4-509d084dcec1" (UID: "e0ce846a-f7ca-4f96-9bb4-509d084dcec1"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:21.165732 master-0 kubenswrapper[7385]: I0319 09:33:21.165476 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e0ce846a-f7ca-4f96-9bb4-509d084dcec1" (UID: "e0ce846a-f7ca-4f96-9bb4-509d084dcec1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:21.165732 master-0 kubenswrapper[7385]: I0319 09:33:21.165619 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:21.165732 master-0 kubenswrapper[7385]: I0319 09:33:21.165634 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:21.168505 master-0 kubenswrapper[7385]: I0319 09:33:21.168442 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e0ce846a-f7ca-4f96-9bb4-509d084dcec1" (UID: "e0ce846a-f7ca-4f96-9bb4-509d084dcec1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:33:21.267979 master-0 kubenswrapper[7385]: I0319 09:33:21.267898 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce846a-f7ca-4f96-9bb4-509d084dcec1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:21.825047 master-0 kubenswrapper[7385]: I0319 09:33:21.824938 7385 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_67658b93f6f5927402b87ec35623e46e/kube-controller-manager-cert-syncer/0.log" Mar 19 09:33:21.825930 master-0 kubenswrapper[7385]: I0319 09:33:21.825200 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:21.825930 master-0 kubenswrapper[7385]: I0319 09:33:21.825208 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8911f4949ad2b1026cf67388b4c856ca207ee327d335f0a0ffbddeb06f138626" Mar 19 09:33:21.827621 master-0 kubenswrapper[7385]: I0319 09:33:21.827345 7385 generic.go:334] "Generic (PLEG): container finished" podID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerID="542dee821bce6b00fb4a89381e02478e42521bf4fb5559fd959616b012db8e61" exitCode=0 Mar 19 09:33:21.828693 master-0 kubenswrapper[7385]: I0319 09:33:21.827402 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"c20d34ff-5b2a-4142-802f-57a7a38c5a12","Type":"ContainerDied","Data":"542dee821bce6b00fb4a89381e02478e42521bf4fb5559fd959616b012db8e61"} Mar 19 09:33:21.830880 master-0 kubenswrapper[7385]: I0319 09:33:21.830807 7385 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="67658b93f6f5927402b87ec35623e46e" 
podUID="d3939b09ae7c21557b3dd5ab01349318" Mar 19 09:33:21.835900 master-0 kubenswrapper[7385]: I0319 09:33:21.835789 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"e0ce846a-f7ca-4f96-9bb4-509d084dcec1","Type":"ContainerDied","Data":"fa6f8cb5d8c6bf0298daad9cbc84db09fdcf39078ac76e6417bed28402a86c24"} Mar 19 09:33:21.835900 master-0 kubenswrapper[7385]: I0319 09:33:21.835877 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6f8cb5d8c6bf0298daad9cbc84db09fdcf39078ac76e6417bed28402a86c24" Mar 19 09:33:21.836112 master-0 kubenswrapper[7385]: I0319 09:33:21.835883 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 19 09:33:21.866321 master-0 kubenswrapper[7385]: I0319 09:33:21.866227 7385 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="67658b93f6f5927402b87ec35623e46e" podUID="d3939b09ae7c21557b3dd5ab01349318" Mar 19 09:33:22.530630 master-0 kubenswrapper[7385]: I0319 09:33:22.530569 7385 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7" Mar 19 09:33:22.530972 master-0 kubenswrapper[7385]: E0319 09:33:22.530922 7385 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-vfnhd_openshift-ingress-operator(8bdeb4f3-99f7-44ef-beac-53c3cc073c5a)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" podUID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" Mar 19 09:33:22.539775 master-0 kubenswrapper[7385]: I0319 09:33:22.539713 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="67658b93f6f5927402b87ec35623e46e" path="/var/lib/kubelet/pods/67658b93f6f5927402b87ec35623e46e/volumes" Mar 19 09:33:23.216593 master-0 kubenswrapper[7385]: I0319 09:33:23.216513 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:33:23.298238 master-0 kubenswrapper[7385]: I0319 09:33:23.298168 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-var-lock\") pod \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " Mar 19 09:33:23.298460 master-0 kubenswrapper[7385]: I0319 09:33:23.298277 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kube-api-access\") pod \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " Mar 19 09:33:23.298460 master-0 kubenswrapper[7385]: I0319 09:33:23.298298 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-var-lock" (OuterVolumeSpecName: "var-lock") pod "c20d34ff-5b2a-4142-802f-57a7a38c5a12" (UID: "c20d34ff-5b2a-4142-802f-57a7a38c5a12"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:23.298460 master-0 kubenswrapper[7385]: I0319 09:33:23.298323 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kubelet-dir\") pod \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\" (UID: \"c20d34ff-5b2a-4142-802f-57a7a38c5a12\") " Mar 19 09:33:23.298578 master-0 kubenswrapper[7385]: I0319 09:33:23.298490 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c20d34ff-5b2a-4142-802f-57a7a38c5a12" (UID: "c20d34ff-5b2a-4142-802f-57a7a38c5a12"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:23.298911 master-0 kubenswrapper[7385]: I0319 09:33:23.298883 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:23.298963 master-0 kubenswrapper[7385]: I0319 09:33:23.298911 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:23.300950 master-0 kubenswrapper[7385]: I0319 09:33:23.300877 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c20d34ff-5b2a-4142-802f-57a7a38c5a12" (UID: "c20d34ff-5b2a-4142-802f-57a7a38c5a12"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:33:23.399808 master-0 kubenswrapper[7385]: I0319 09:33:23.399731 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20d34ff-5b2a-4142-802f-57a7a38c5a12-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:23.849006 master-0 kubenswrapper[7385]: I0319 09:33:23.848901 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"c20d34ff-5b2a-4142-802f-57a7a38c5a12","Type":"ContainerDied","Data":"b4944acb6dda035dde270308345019acdc87bd2a81d8b65e1c0a2845a63c510d"} Mar 19 09:33:23.849006 master-0 kubenswrapper[7385]: I0319 09:33:23.848950 7385 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4944acb6dda035dde270308345019acdc87bd2a81d8b65e1c0a2845a63c510d" Mar 19 09:33:23.849006 master-0 kubenswrapper[7385]: I0319 09:33:23.848945 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:33:24.865331 master-0 kubenswrapper[7385]: I0319 09:33:24.865285 7385 generic.go:334] "Generic (PLEG): container finished" podID="57227a66-c758-4a46-a5e1-f603baa3f570" containerID="fd8bb80d426a5da3f781ac199d36ba296827076a405918db4a564ba51e18307a" exitCode=0 Mar 19 09:33:24.865863 master-0 kubenswrapper[7385]: I0319 09:33:24.865333 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerDied","Data":"fd8bb80d426a5da3f781ac199d36ba296827076a405918db4a564ba51e18307a"} Mar 19 09:33:24.865863 master-0 kubenswrapper[7385]: I0319 09:33:24.865366 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" event={"ID":"57227a66-c758-4a46-a5e1-f603baa3f570","Type":"ContainerStarted","Data":"b5bec82e7cf7c425b75b7ebedc58010764678db16a2779038ea14c05819829cd"} Mar 19 09:33:24.865863 master-0 kubenswrapper[7385]: I0319 09:33:24.865385 7385 scope.go:117] "RemoveContainer" containerID="2fae7b44934deb2f61dfa30059ff2a9d4e27ce928263e021c35df2bf0416f39e" Mar 19 09:33:25.246641 master-0 kubenswrapper[7385]: I0319 09:33:25.246405 7385 scope.go:117] "RemoveContainer" containerID="759c350f58d15a9fddc9b4b3e95d92a6db8fdfb3e82a8bd183c2d36ff84c76ed" Mar 19 09:33:25.268785 master-0 kubenswrapper[7385]: I0319 09:33:25.268741 7385 scope.go:117] "RemoveContainer" containerID="9e6e66f3c8a2bf098bd2b9e3696054dca0c7f2ceb70c56ac6de3f703458cdd0e" Mar 19 09:33:25.729029 master-0 kubenswrapper[7385]: I0319 09:33:25.728930 7385 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:25.732770 master-0 kubenswrapper[7385]: I0319 09:33:25.732717 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:25.732770 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:25.732770 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:25.732770 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:25.733111 master-0 kubenswrapper[7385]: I0319 09:33:25.732771 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:25.951030 master-0 kubenswrapper[7385]: I0319 09:33:25.950936 7385 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:33:25.951527 master-0 kubenswrapper[7385]: E0319 09:33:25.951465 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerName="installer" Mar 19 09:33:25.951527 master-0 kubenswrapper[7385]: I0319 09:33:25.951486 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerName="installer" Mar 19 09:33:25.951527 master-0 kubenswrapper[7385]: E0319 09:33:25.951509 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerName="installer" Mar 19 09:33:25.951527 master-0 kubenswrapper[7385]: I0319 09:33:25.951518 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerName="installer" Mar 19 09:33:25.951739 master-0 kubenswrapper[7385]: I0319 09:33:25.951706 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerName="installer" Mar 19 09:33:25.951739 master-0 
kubenswrapper[7385]: I0319 09:33:25.951733 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerName="installer" Mar 19 09:33:25.952364 master-0 kubenswrapper[7385]: I0319 09:33:25.952329 7385 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 19 09:33:25.952567 master-0 kubenswrapper[7385]: I0319 09:33:25.952493 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:25.952740 master-0 kubenswrapper[7385]: I0319 09:33:25.952692 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://0ea74be9ce6a8db82cc76cb8b1abbace62eee2a97494f9a8b0c0af4311285f49" gracePeriod=15 Mar 19 09:33:25.952892 master-0 kubenswrapper[7385]: I0319 09:33:25.952822 7385 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4ff4b935126cc5d750c1d850d7bd8bc2f70fd6fa92c703e7c39a069db8572af3" gracePeriod=15 Mar 19 09:33:25.955729 master-0 kubenswrapper[7385]: I0319 09:33:25.955658 7385 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 09:33:25.956066 master-0 kubenswrapper[7385]: E0319 09:33:25.956014 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 09:33:25.956066 master-0 kubenswrapper[7385]: I0319 09:33:25.956046 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" 
containerName="kube-apiserver-insecure-readyz" Mar 19 09:33:25.956066 master-0 kubenswrapper[7385]: E0319 09:33:25.956065 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 09:33:25.956263 master-0 kubenswrapper[7385]: I0319 09:33:25.956080 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 09:33:25.956263 master-0 kubenswrapper[7385]: E0319 09:33:25.956103 7385 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 09:33:25.956263 master-0 kubenswrapper[7385]: I0319 09:33:25.956115 7385 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 09:33:25.956471 master-0 kubenswrapper[7385]: I0319 09:33:25.956339 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 09:33:25.956471 master-0 kubenswrapper[7385]: I0319 09:33:25.956374 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 09:33:25.956471 master-0 kubenswrapper[7385]: I0319 09:33:25.956433 7385 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 09:33:25.959598 master-0 kubenswrapper[7385]: I0319 09:33:25.959530 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.012866 master-0 kubenswrapper[7385]: E0319 09:33:26.012789 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.032587 master-0 kubenswrapper[7385]: E0319 09:33:26.032426 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.032971 master-0 kubenswrapper[7385]: I0319 09:33:26.032928 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.033027 master-0 kubenswrapper[7385]: I0319 09:33:26.032998 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.033084 master-0 kubenswrapper[7385]: I0319 09:33:26.033027 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.033084 master-0 kubenswrapper[7385]: I0319 09:33:26.033055 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.033159 master-0 kubenswrapper[7385]: I0319 09:33:26.033131 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134077 master-0 kubenswrapper[7385]: I0319 09:33:26.134021 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134279 master-0 kubenswrapper[7385]: I0319 09:33:26.134186 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134279 master-0 kubenswrapper[7385]: I0319 09:33:26.134236 7385 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.134351 master-0 kubenswrapper[7385]: I0319 09:33:26.134279 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134351 master-0 kubenswrapper[7385]: I0319 09:33:26.134313 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.134407 master-0 kubenswrapper[7385]: I0319 09:33:26.134354 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134592 master-0 kubenswrapper[7385]: I0319 09:33:26.134450 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 
09:33:26.134592 master-0 kubenswrapper[7385]: I0319 09:33:26.134507 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134592 master-0 kubenswrapper[7385]: I0319 09:33:26.134535 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134712 master-0 kubenswrapper[7385]: I0319 09:33:26.134591 7385 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.134712 master-0 kubenswrapper[7385]: I0319 09:33:26.134611 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134712 master-0 kubenswrapper[7385]: I0319 09:33:26.134613 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.134712 master-0 kubenswrapper[7385]: I0319 09:33:26.134596 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.237156 master-0 kubenswrapper[7385]: I0319 09:33:26.237044 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.237414 master-0 kubenswrapper[7385]: I0319 09:33:26.237183 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.237414 master-0 kubenswrapper[7385]: I0319 09:33:26.237234 7385 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.237414 master-0 kubenswrapper[7385]: I0319 09:33:26.237367 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.237603 master-0 kubenswrapper[7385]: I0319 09:33:26.237429 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.237603 master-0 kubenswrapper[7385]: I0319 09:33:26.237470 7385 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.313795 master-0 kubenswrapper[7385]: I0319 09:33:26.313634 7385 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.332894 master-0 kubenswrapper[7385]: W0319 09:33:26.332831 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fb4ea7f83036d9c6adf3454fc7e9db.slice/crio-d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598 WatchSource:0}: Error finding container d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598: Status 404 returned error can't find the container with id d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598 Mar 19 09:33:26.333416 master-0 kubenswrapper[7385]: I0319 09:33:26.333376 7385 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.335698 master-0 kubenswrapper[7385]: E0319 09:33:26.335508 7385 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e344c7c161141 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:16fb4ea7f83036d9c6adf3454fc7e9db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:33:26.334533953 +0000 UTC m=+902.008963644,LastTimestamp:2026-03-19 09:33:26.334533953 +0000 UTC m=+902.008963644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:33:26.379244 master-0 kubenswrapper[7385]: W0319 09:33:26.379149 7385 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5ce05b3d592e63f1f92202d52b9635.slice/crio-cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257 WatchSource:0}: Error finding container cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257: Status 404 returned error can't find the container with id cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257 Mar 19 09:33:26.730315 master-0 kubenswrapper[7385]: I0319 09:33:26.730253 7385 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:26.730315 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:26.730315 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:26.730315 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:26.730625 master-0 kubenswrapper[7385]: I0319 09:33:26.730327 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:26.896416 master-0 kubenswrapper[7385]: I0319 09:33:26.896333 7385 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="4ff4b935126cc5d750c1d850d7bd8bc2f70fd6fa92c703e7c39a069db8572af3" exitCode=0 Mar 19 09:33:26.898393 master-0 kubenswrapper[7385]: I0319 09:33:26.898343 7385 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244" exitCode=0 Mar 19 09:33:26.898568 master-0 kubenswrapper[7385]: I0319 09:33:26.898387 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244"} Mar 19 09:33:26.898568 master-0 kubenswrapper[7385]: I0319 09:33:26.898458 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257"} Mar 19 09:33:26.900026 
master-0 kubenswrapper[7385]: I0319 09:33:26.899976 7385 generic.go:334] "Generic (PLEG): container finished" podID="98826625-8de0-4bf7-8926-ec62517369e5" containerID="47f63f0f88f52262ec4bb448c720e1d131874e1c77a757276ce8eb2d6c24cab5" exitCode=0 Mar 19 09:33:26.900122 master-0 kubenswrapper[7385]: I0319 09:33:26.900040 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"98826625-8de0-4bf7-8926-ec62517369e5","Type":"ContainerDied","Data":"47f63f0f88f52262ec4bb448c720e1d131874e1c77a757276ce8eb2d6c24cab5"} Mar 19 09:33:26.901275 master-0 kubenswrapper[7385]: I0319 09:33:26.901217 7385 status_manager.go:851] "Failed to get status for pod" podUID="98826625-8de0-4bf7-8926-ec62517369e5" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:33:26.901412 master-0 kubenswrapper[7385]: E0319 09:33:26.901348 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:26.901881 master-0 kubenswrapper[7385]: I0319 09:33:26.901833 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578"} Mar 19 09:33:26.901979 master-0 kubenswrapper[7385]: I0319 09:33:26.901889 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598"} Mar 19 09:33:26.902767 master-0 kubenswrapper[7385]: E0319 09:33:26.902720 7385 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:26.902850 master-0 kubenswrapper[7385]: I0319 09:33:26.902709 7385 status_manager.go:851] "Failed to get status for pod" podUID="98826625-8de0-4bf7-8926-ec62517369e5" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:33:27.730956 master-0 kubenswrapper[7385]: I0319 09:33:27.730904 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:27.730956 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:27.730956 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:27.730956 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:27.731482 master-0 kubenswrapper[7385]: I0319 09:33:27.730970 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:27.917889 master-0 kubenswrapper[7385]: I0319 09:33:27.917770 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5"} Mar 19 09:33:27.917889 master-0 kubenswrapper[7385]: I0319 09:33:27.917826 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45"} Mar 19 09:33:27.917889 master-0 kubenswrapper[7385]: I0319 09:33:27.917837 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1"} Mar 19 09:33:27.917889 master-0 kubenswrapper[7385]: I0319 09:33:27.917846 7385 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573"} Mar 19 09:33:28.254787 master-0 kubenswrapper[7385]: I0319 09:33:28.254746 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:33:28.260093 master-0 kubenswrapper[7385]: I0319 09:33:28.260032 7385 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:28.366259 master-0 kubenswrapper[7385]: I0319 09:33:28.366201 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 09:33:28.366508 master-0 kubenswrapper[7385]: I0319 09:33:28.366314 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 09:33:28.366508 master-0 kubenswrapper[7385]: I0319 09:33:28.366345 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config" (OuterVolumeSpecName: "config") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.366508 master-0 kubenswrapper[7385]: I0319 09:33:28.366428 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.366508 master-0 kubenswrapper[7385]: I0319 09:33:28.366496 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"98826625-8de0-4bf7-8926-ec62517369e5\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " Mar 19 09:33:28.366706 master-0 kubenswrapper[7385]: I0319 09:33:28.366521 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"98826625-8de0-4bf7-8926-ec62517369e5\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " Mar 19 09:33:28.366706 master-0 kubenswrapper[7385]: I0319 09:33:28.366605 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98826625-8de0-4bf7-8926-ec62517369e5" (UID: "98826625-8de0-4bf7-8926-ec62517369e5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.366706 master-0 kubenswrapper[7385]: I0319 09:33:28.366691 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock" (OuterVolumeSpecName: "var-lock") pod "98826625-8de0-4bf7-8926-ec62517369e5" (UID: "98826625-8de0-4bf7-8926-ec62517369e5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.366826 master-0 kubenswrapper[7385]: I0319 09:33:28.366746 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 09:33:28.366826 master-0 kubenswrapper[7385]: I0319 09:33:28.366801 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"98826625-8de0-4bf7-8926-ec62517369e5\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " Mar 19 09:33:28.366826 master-0 kubenswrapper[7385]: I0319 09:33:28.366822 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.366951 master-0 kubenswrapper[7385]: I0319 09:33:28.366854 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 09:33:28.366951 master-0 kubenswrapper[7385]: I0319 09:33:28.366911 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 09:33:28.367035 master-0 kubenswrapper[7385]: I0319 09:33:28.366935 7385 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 19 09:33:28.367035 master-0 kubenswrapper[7385]: I0319 09:33:28.366995 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs" (OuterVolumeSpecName: "logs") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.367035 master-0 kubenswrapper[7385]: I0319 09:33:28.366995 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.367155 master-0 kubenswrapper[7385]: I0319 09:33:28.367050 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets" (OuterVolumeSpecName: "secrets") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367431 7385 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367454 7385 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367466 7385 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367476 7385 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367485 7385 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367510 7385 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367521 7385 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.367617 master-0 kubenswrapper[7385]: I0319 09:33:28.367530 7385 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.372570 master-0 kubenswrapper[7385]: I0319 09:33:28.369637 7385 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98826625-8de0-4bf7-8926-ec62517369e5" (UID: "98826625-8de0-4bf7-8926-ec62517369e5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:33:28.468926 master-0 kubenswrapper[7385]: I0319 09:33:28.468856 7385 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:33:28.536671 master-0 kubenswrapper[7385]: I0319 09:33:28.536629 7385 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes" Mar 19 09:33:28.537017 master-0 kubenswrapper[7385]: I0319 09:33:28.536991 7385 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 19 09:33:28.730117 master-0 kubenswrapper[7385]: I0319 09:33:28.729955 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:28.730117 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:28.730117 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:28.730117 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:28.730117 master-0 kubenswrapper[7385]: I0319 09:33:28.730084 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:28.925338 master-0 kubenswrapper[7385]: I0319 09:33:28.925283 7385 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="0ea74be9ce6a8db82cc76cb8b1abbace62eee2a97494f9a8b0c0af4311285f49" exitCode=0 Mar 19 09:33:28.925888 master-0 
kubenswrapper[7385]: I0319 09:33:28.925413 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 19 09:33:28.931108 master-0 kubenswrapper[7385]: I0319 09:33:28.931065 7385 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:29.733139 master-0 kubenswrapper[7385]: I0319 09:33:29.733082 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:29.733139 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:29.733139 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:29.733139 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:29.733484 master-0 kubenswrapper[7385]: I0319 09:33:29.733149 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:30.730705 master-0 kubenswrapper[7385]: I0319 09:33:30.730630 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:30.730705 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:30.730705 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:30.730705 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:30.731436 master-0 kubenswrapper[7385]: I0319 09:33:30.730706 7385 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:31.731770 master-0 kubenswrapper[7385]: I0319 09:33:31.731700 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:31.731770 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:31.731770 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:31.731770 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:31.732515 master-0 kubenswrapper[7385]: I0319 09:33:31.731802 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:32.731054 master-0 kubenswrapper[7385]: I0319 09:33:32.730987 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:32.731054 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:32.731054 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:32.731054 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:32.731330 master-0 kubenswrapper[7385]: I0319 09:33:32.731057 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:33.739695 
master-0 kubenswrapper[7385]: I0319 09:33:33.739322 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:33.739695 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:33.739695 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:33.739695 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:33.739695 master-0 kubenswrapper[7385]: I0319 09:33:33.739437 7385 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:34.649639 master-0 kubenswrapper[7385]: I0319 09:33:34.649592 7385 request.go:700] Waited for 1.013752092s, retries: 1, retry-after: 5s - retry-reason: 503 - request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=13026&timeout=53m37s&timeoutSeconds=3217&watch=true Mar 19 09:33:34.730444 master-0 kubenswrapper[7385]: I0319 09:33:34.730401 7385 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-k99cg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 19 09:33:34.730444 master-0 kubenswrapper[7385]: [-]has-synced failed: reason withheld Mar 19 09:33:34.730444 master-0 kubenswrapper[7385]: [+]process-running ok Mar 19 09:33:34.730444 master-0 kubenswrapper[7385]: healthz check failed Mar 19 09:33:34.730811 master-0 kubenswrapper[7385]: I0319 09:33:34.730779 7385 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" podUID="57227a66-c758-4a46-a5e1-f603baa3f570" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 19 09:33:34.960839 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 19 09:33:34.990364 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 09:33:34.990636 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 19 09:33:34.991497 master-0 systemd[1]: kubelet.service: Consumed 1min 58.340s CPU time. Mar 19 09:33:35.006373 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 19 09:33:35.135794 master-0 kubenswrapper[27819]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 09:33:35.135794 master-0 kubenswrapper[27819]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 19 09:33:35.135794 master-0 kubenswrapper[27819]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 09:33:35.135794 master-0 kubenswrapper[27819]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 09:33:35.135794 master-0 kubenswrapper[27819]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 19 09:33:35.135794 master-0 kubenswrapper[27819]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 19 09:33:35.135794 master-0 kubenswrapper[27819]: I0319 09:33:35.133523 27819 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137251 27819 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137271 27819 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137277 27819 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137281 27819 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137286 27819 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137289 27819 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137293 27819 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137297 27819 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137301 27819 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137305 27819 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137310 27819 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137315 27819 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137319 27819 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137323 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137327 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137331 27819 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137334 27819 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137338 27819 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 09:33:35.139200 master-0 kubenswrapper[27819]: W0319 09:33:35.137342 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137345 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137349 27819 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137352 27819 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137356 27819 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137360 27819 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137363 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137367 27819 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137370 27819 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137374 27819 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137378 27819 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137381 27819 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137385 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137393 27819 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137398 27819 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137402 27819 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137405 27819 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137409 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137413 27819 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137417 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 09:33:35.139819 master-0 kubenswrapper[27819]: W0319 09:33:35.137421 27819 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137424 27819 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137428 27819 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137432 27819 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137435 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137439 27819 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137442 27819 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137446 27819 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137449 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137454 27819 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137458 27819 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137462 27819 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137466 27819 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137471 27819 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137475 27819 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137478 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137482 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137485 27819 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137489 27819 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137493 27819 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 09:33:35.140394 master-0 kubenswrapper[27819]: W0319 09:33:35.137496 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137500 27819 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137503 27819 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137507 27819 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137510 27819 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137514 27819 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137519 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137523 27819 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137526 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137530 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137535 27819 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137555 27819 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137560 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: W0319 09:33:35.137564 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137652 27819 flags.go:64] FLAG: --address="0.0.0.0"
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137661 27819 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137668 27819 flags.go:64] FLAG: --anonymous-auth="true"
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137674 27819 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137679 27819 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137684 27819 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137690 27819 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 19 09:33:35.140915 master-0 kubenswrapper[27819]: I0319 09:33:35.137695 27819 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137700 27819 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137704 27819 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137709 27819 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137713 27819 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137717 27819 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137722 27819 flags.go:64] FLAG: --cgroup-root=""
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137726 27819 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137730 27819 flags.go:64] FLAG: --client-ca-file=""
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137734 27819 flags.go:64] FLAG: --cloud-config=""
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137738 27819 flags.go:64] FLAG: --cloud-provider=""
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137743 27819 flags.go:64] FLAG: --cluster-dns="[]"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137749 27819 flags.go:64] FLAG: --cluster-domain=""
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137754 27819 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137758 27819 flags.go:64] FLAG: --config-dir=""
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137762 27819 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137767 27819 flags.go:64] FLAG: --container-log-max-files="5"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137772 27819 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137777 27819 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137781 27819 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137785 27819 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137789 27819 flags.go:64] FLAG: --contention-profiling="false"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137793 27819 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137797 27819 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137802 27819 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 19 09:33:35.141717 master-0 kubenswrapper[27819]: I0319 09:33:35.137806 27819 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137815 27819 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137821 27819 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137826 27819 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137831 27819 flags.go:64] FLAG: --enable-load-reader="false"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137836 27819 flags.go:64] FLAG: --enable-server="true"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137841 27819 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137847 27819 flags.go:64] FLAG: --event-burst="100"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137852 27819 flags.go:64] FLAG: --event-qps="50"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137856 27819 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137861 27819 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137866 27819 flags.go:64] FLAG: --eviction-hard=""
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137872 27819 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137877 27819 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137883 27819 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.137888 27819 flags.go:64] FLAG: --eviction-soft=""
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138069 27819 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138075 27819 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138080 27819 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138085 27819 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138091 27819 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138096 27819 flags.go:64] FLAG: --fail-swap-on="true"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138102 27819 flags.go:64] FLAG: --feature-gates=""
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138109 27819 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138115 27819 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 19 09:33:35.142449 master-0 kubenswrapper[27819]: I0319 09:33:35.138121 27819 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138126 27819 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138131 27819 flags.go:64] FLAG: --healthz-port="10248"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138137 27819 flags.go:64] FLAG: --help="false"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138142 27819 flags.go:64] FLAG: --hostname-override=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138147 27819 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138152 27819 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138157 27819 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138161 27819 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138165 27819 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138170 27819 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138175 27819 flags.go:64] FLAG: --image-service-endpoint=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138179 27819 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138183 27819 flags.go:64] FLAG: --kube-api-burst="100"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138188 27819 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138192 27819 flags.go:64] FLAG: --kube-api-qps="50"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138196 27819 flags.go:64] FLAG: --kube-reserved=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138200 27819 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138205 27819 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138209 27819 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138214 27819 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138217 27819 flags.go:64] FLAG: --lock-file=""
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138222 27819 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138226 27819 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138230 27819 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138237 27819 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 19 09:33:35.143127 master-0 kubenswrapper[27819]: I0319 09:33:35.138241 27819 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138245 27819 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138249 27819 flags.go:64] FLAG: --logging-format="text"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138254 27819 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138258 27819 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138262 27819 flags.go:64] FLAG: --manifest-url=""
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138266 27819 flags.go:64] FLAG: --manifest-url-header=""
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138272 27819 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138276 27819 flags.go:64] FLAG: --max-open-files="1000000"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138281 27819 flags.go:64] FLAG: --max-pods="110"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138286 27819 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138290 27819 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138294 27819 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138299 27819 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138303 27819 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138307 27819 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138311 27819 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138326 27819 flags.go:64] FLAG: --node-status-max-images="50"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138330 27819 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138334 27819 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138340 27819 flags.go:64] FLAG: --pod-cidr=""
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138344 27819 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138350 27819 flags.go:64] FLAG: --pod-manifest-path=""
Mar 19 09:33:35.145175 master-0 kubenswrapper[27819]: I0319 09:33:35.138355 27819 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138359 27819 flags.go:64] FLAG: --pods-per-core="0"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138363 27819 flags.go:64] FLAG: --port="10250"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138367 27819 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138371 27819 flags.go:64] FLAG: --provider-id=""
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138376 27819 flags.go:64] FLAG: --qos-reserved=""
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138380 27819 flags.go:64] FLAG: --read-only-port="10255"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138384 27819 flags.go:64] FLAG: --register-node="true"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138388 27819 flags.go:64] FLAG: --register-schedulable="true"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138392 27819 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138399 27819 flags.go:64] FLAG: --registry-burst="10"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138403 27819 flags.go:64] FLAG: --registry-qps="5"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138407 27819 flags.go:64] FLAG: --reserved-cpus=""
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138411 27819 flags.go:64] FLAG: --reserved-memory=""
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138417 27819 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138421 27819 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138425 27819 flags.go:64] FLAG: --rotate-certificates="false"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138429 27819 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138433 27819 flags.go:64] FLAG: --runonce="false"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138437 27819 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138442 27819 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138446 27819 flags.go:64] FLAG: --seccomp-default="false"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138450 27819 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138454 27819 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138458 27819 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138462 27819 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 19 09:33:35.145946 master-0 kubenswrapper[27819]: I0319 09:33:35.138467 27819 flags.go:64] FLAG: --storage-driver-password="root"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138474 27819 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138478 27819 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138482 27819 flags.go:64] FLAG: --storage-driver-user="root"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138486 27819 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138491 27819 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138495 27819 flags.go:64] FLAG: --system-cgroups=""
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138500 27819 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138506 27819 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138510 27819 flags.go:64] FLAG: --tls-cert-file=""
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138514 27819 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138520 27819 flags.go:64] FLAG: --tls-min-version=""
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138524 27819 flags.go:64] FLAG: --tls-private-key-file=""
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138528 27819 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138532 27819 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138536 27819 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138557 27819 flags.go:64] FLAG: --v="2"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138563 27819 flags.go:64] FLAG: --version="false"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138568 27819 flags.go:64] FLAG: --vmodule=""
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138575 27819 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: I0319 09:33:35.138581 27819 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: W0319 09:33:35.138716 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: W0319 09:33:35.138725 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: W0319 09:33:35.138729 27819 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 19 09:33:35.151036 master-0 kubenswrapper[27819]: W0319 09:33:35.138734 27819 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138738 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138743 27819 feature_gate.go:330] unrecognized feature gate: Example
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138749 27819 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138756 27819 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138762 27819 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138768 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138774 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138779 27819 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138788 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138793 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138798 27819 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138804 27819 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138809 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138814 27819 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138821 27819 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138828 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138833 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138838 27819 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 19 09:33:35.152117 master-0 kubenswrapper[27819]: W0319 09:33:35.138845 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138849 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138854 27819 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138859 27819 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138863 27819 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138868 27819 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138872 27819 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138877 27819 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138882 27819 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138890 27819 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138894 27819 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138898 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138901 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138906 27819 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138910 27819 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138916 27819 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138920 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138925 27819 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138930 27819 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138934 27819 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 19 09:33:35.152805 master-0 kubenswrapper[27819]: W0319 09:33:35.138939 27819 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138943 27819 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138950 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 19 
09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138955 27819 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138959 27819 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138964 27819 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138969 27819 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138973 27819 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138978 27819 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138983 27819 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138989 27819 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138994 27819 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.138999 27819 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139004 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139008 27819 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139013 27819 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139019 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139023 27819 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139028 27819 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139032 27819 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:33:35.153475 master-0 kubenswrapper[27819]: W0319 09:33:35.139037 27819 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139049 27819 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139054 27819 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139059 27819 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 
19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139063 27819 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139068 27819 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139073 27819 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139077 27819 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139082 27819 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.139087 27819 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: I0319 09:33:35.139095 27819 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: I0319 09:33:35.145735 27819 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: I0319 09:33:35.145769 27819 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.145847 27819 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 09:33:35.154201 master-0 
kubenswrapper[27819]: W0319 09:33:35.145858 27819 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 19 09:33:35.154201 master-0 kubenswrapper[27819]: W0319 09:33:35.145866 27819 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145872 27819 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145877 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145881 27819 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145886 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145891 27819 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145895 27819 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145900 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145905 27819 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145909 27819 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145916 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145921 27819 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:33:35.154891 master-0 
kubenswrapper[27819]: W0319 09:33:35.145925 27819 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145930 27819 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145934 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145939 27819 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145943 27819 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145948 27819 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145953 27819 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145957 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 19 09:33:35.154891 master-0 kubenswrapper[27819]: W0319 09:33:35.145962 27819 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.145967 27819 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.145972 27819 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.145977 27819 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.145981 27819 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 
09:33:35.145986 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.145990 27819 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.145995 27819 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146001 27819 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146006 27819 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146013 27819 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146019 27819 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146025 27819 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146030 27819 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146036 27819 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146041 27819 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146046 27819 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146051 27819 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:33:35.155671 master-0 
kubenswrapper[27819]: W0319 09:33:35.146056 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146061 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:33:35.155671 master-0 kubenswrapper[27819]: W0319 09:33:35.146066 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146071 27819 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146077 27819 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146082 27819 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146087 27819 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146092 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146098 27819 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146103 27819 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146109 27819 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146114 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146119 27819 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146124 27819 
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146129 27819 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146135 27819 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146141 27819 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146146 27819 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146151 27819 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146156 27819 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146160 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146165 27819 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:33:35.156389 master-0 kubenswrapper[27819]: W0319 09:33:35.146171 27819 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146176 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146180 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146185 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 
09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146189 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146194 27819 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146199 27819 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146204 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146210 27819 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146215 27819 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: I0319 09:33:35.146223 27819 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146371 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146380 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146386 27819 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 19 09:33:35.157006 master-0 kubenswrapper[27819]: W0319 09:33:35.146390 27819 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146396 27819 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146400 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146405 27819 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146410 27819 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146414 27819 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146419 27819 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146423 27819 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146434 27819 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146440 27819 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146445 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146450 27819 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146454 27819 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146459 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146464 27819 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146468 27819 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146472 27819 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146477 27819 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146481 27819 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 19 09:33:35.157385 master-0 kubenswrapper[27819]: W0319 09:33:35.146486 27819 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146491 27819 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146496 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 19 09:33:35.157966 
master-0 kubenswrapper[27819]: W0319 09:33:35.146500 27819 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146505 27819 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146509 27819 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146514 27819 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146519 27819 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146523 27819 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146527 27819 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146533 27819 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146560 27819 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146566 27819 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146571 27819 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146575 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146580 27819 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 19 09:33:35.157966 
master-0 kubenswrapper[27819]: W0319 09:33:35.146585 27819 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146590 27819 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146594 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146598 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 19 09:33:35.157966 master-0 kubenswrapper[27819]: W0319 09:33:35.146602 27819 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146607 27819 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146612 27819 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146616 27819 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146622 27819 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146629 27819 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146634 27819 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146639 27819 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146645 27819 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146650 27819 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146654 27819 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146660 27819 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146666 27819 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146671 27819 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146676 27819 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146681 27819 feature_gate.go:330] unrecognized feature gate: Example Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146686 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146691 27819 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: 
W0319 09:33:35.146697 27819 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146702 27819 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 19 09:33:35.158473 master-0 kubenswrapper[27819]: W0319 09:33:35.146707 27819 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146712 27819 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146718 27819 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146723 27819 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146728 27819 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146733 27819 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146738 27819 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146743 27819 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146749 27819 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: W0319 09:33:35.146755 27819 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: I0319 09:33:35.146762 27819 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: I0319 09:33:35.146942 27819 server.go:940] "Client rotation is on, will bootstrap in background" Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: I0319 09:33:35.148458 27819 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: I0319 09:33:35.148536 27819 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 19 09:33:35.159000 master-0 kubenswrapper[27819]: I0319 09:33:35.148777 27819 server.go:997] "Starting client certificate rotation"
Mar 19 09:33:35.159343 master-0 kubenswrapper[27819]: I0319 09:33:35.148790 27819 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 19 09:33:35.159343 master-0 kubenswrapper[27819]: I0319 09:33:35.149382 27819 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 09:33:35.159343 master-0 kubenswrapper[27819]: I0319 09:33:35.150498 27819 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-20 09:09:07 +0000 UTC, rotation deadline is 2026-03-20 05:39:34.821472598 +0000 UTC
Mar 19 09:33:35.159343 master-0 kubenswrapper[27819]: I0319 09:33:35.150531 27819 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h5m59.670944085s for next certificate rotation
Mar 19 09:33:35.159343 master-0 kubenswrapper[27819]: I0319 09:33:35.150602 27819 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 19 09:33:35.159343 master-0 kubenswrapper[27819]: I0319 09:33:35.153453 27819 log.go:25] "Validated CRI v1 runtime API"
Mar 19 09:33:35.163328 master-0 kubenswrapper[27819]: I0319 09:33:35.163281 27819 log.go:25] "Validated CRI v1 image API"
Mar 19 09:33:35.164185 master-0 kubenswrapper[27819]: I0319 09:33:35.164147 27819 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 19 09:33:35.172565 master-0 kubenswrapper[27819]: I0319 09:33:35.172498 27819 fs.go:135] Filesystem UUIDs: map[433c3f11-76c1-4144-a2fc-7b9790746712:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 19 09:33:35.175628 master-0 kubenswrapper[27819]: I0319 09:33:35.172577 27819 fs.go:136] Filesystem partitions:
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/05c047f1dd1f77466b4da70d7d89474989156a4dc7f05fb84cbb6a93b60f00f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/05c047f1dd1f77466b4da70d7d89474989156a4dc7f05fb84cbb6a93b60f00f0/userdata/shm major:0 minor:851 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/093a8e850a736d3eca3797467a6bc2ecea1fef6e909d2da61102bdda8dc94887/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/093a8e850a736d3eca3797467a6bc2ecea1fef6e909d2da61102bdda8dc94887/userdata/shm major:0 minor:827 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e413507c6f4a8e010e922bcd426014dd970b85408295730281ace1a504f9959/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e413507c6f4a8e010e922bcd426014dd970b85408295730281ace1a504f9959/userdata/shm major:0 minor:346 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65/userdata/shm major:0 minor:98 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1504c38858cfd6dba74a1e8e13c6787eab9fb680b233330961a4b98abfa59449/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1504c38858cfd6dba74a1e8e13c6787eab9fb680b233330961a4b98abfa59449/userdata/shm major:0 minor:306 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16ae0be12cb0948b576a88de76c552bf6bb4908608f91f6bc384118d39093798/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16ae0be12cb0948b576a88de76c552bf6bb4908608f91f6bc384118d39093798/userdata/shm major:0 minor:798 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/172085267d003a11af66385fae45641af5f2ea573dfe38357436fa95e4bfc2cb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/172085267d003a11af66385fae45641af5f2ea573dfe38357436fa95e4bfc2cb/userdata/shm major:0 minor:1032 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/204782aa21e2bf31865a1381946590d0ce8a970fb26f83eebd02fa7b0497c2c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/204782aa21e2bf31865a1381946590d0ce8a970fb26f83eebd02fa7b0497c2c5/userdata/shm major:0 minor:595 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6/userdata/shm major:0 minor:230 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/280c1ab0d20d5f0a1fc3fe957fae99e999c792256be0729f4bd66bf08519c5bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/280c1ab0d20d5f0a1fc3fe957fae99e999c792256be0729f4bd66bf08519c5bf/userdata/shm major:0 minor:84 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4/userdata/shm major:0 minor:1111 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b6eced12019f1a054184dc214ff7951a270b910027060a2b561a895337a163e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b6eced12019f1a054184dc214ff7951a270b910027060a2b561a895337a163e/userdata/shm major:0 minor:339 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b99a9e40477692f9f0735d27cce4c13db8b181a07746d8c9e160e5b7831c820/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b99a9e40477692f9f0735d27cce4c13db8b181a07746d8c9e160e5b7831c820/userdata/shm major:0 minor:236 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ba0c50971e9f4b73d6981687bf5599b2b14e3a056e01cd696dec3ae2bc23ec5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ba0c50971e9f4b73d6981687bf5599b2b14e3a056e01cd696dec3ae2bc23ec5/userdata/shm major:0 minor:528 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3014cb772787d6c5ed5213751efdfc2f600b71700a9642b8657868066aed7a56/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3014cb772787d6c5ed5213751efdfc2f600b71700a9642b8657868066aed7a56/userdata/shm major:0 minor:796 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/319cb3ca2c37415dc41e1160ebdc6c8cfc6a2108542dd10b877b244ac8b9e929/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/319cb3ca2c37415dc41e1160ebdc6c8cfc6a2108542dd10b877b244ac8b9e929/userdata/shm major:0 minor:490 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33ca2f2b19a1770d26eec6f100c1e6f12e2c50ac6dbb0f1fd1d1831103d4af22/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33ca2f2b19a1770d26eec6f100c1e6f12e2c50ac6dbb0f1fd1d1831103d4af22/userdata/shm major:0 minor:380 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39c6768818fedea75d87ad8b7a8640832bffe77cbf3d443982b6c9295adc4865/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39c6768818fedea75d87ad8b7a8640832bffe77cbf3d443982b6c9295adc4865/userdata/shm major:0 minor:641 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3a0665e823da7bfc0df78c1979cfd4c3ca72731bad4e79e2c131fc1c4139e66f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3a0665e823da7bfc0df78c1979cfd4c3ca72731bad4e79e2c131fc1c4139e66f/userdata/shm major:0 minor:829 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3c61e204454e38428fa04296fdaa0b86068d8df14b3972facff7186f87934a5b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3c61e204454e38428fa04296fdaa0b86068d8df14b3972facff7186f87934a5b/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3cd6d09fe73a460b498f00d76bd556cdb55771a774477420bab191c7dcd68863/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3cd6d09fe73a460b498f00d76bd556cdb55771a774477420bab191c7dcd68863/userdata/shm major:0 minor:1075 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/408fc587f3c1d995e472d57ef08e1448783433be2d773a5e80c2f22fddf79bea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/408fc587f3c1d995e472d57ef08e1448783433be2d773a5e80c2f22fddf79bea/userdata/shm major:0 minor:561 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/42427cdb4004876179dcfbd8f19dca1e35b1708032ece70b1b2417c09bcc6b09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/42427cdb4004876179dcfbd8f19dca1e35b1708032ece70b1b2417c09bcc6b09/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4e41041845987412c5331ff6cc2618d3c5ae42cf3d9f83fd7b71a693c8e76498/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4e41041845987412c5331ff6cc2618d3c5ae42cf3d9f83fd7b71a693c8e76498/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5133c097ddac4c4eb3bf47ec178286cfda103ff21a8e794c8ccd120974cf84fe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5133c097ddac4c4eb3bf47ec178286cfda103ff21a8e794c8ccd120974cf84fe/userdata/shm major:0 minor:322 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/5587303dfbff2e0f6e8f88f34bf2533361126f22ec3322ef362bf2e083f2b5d9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5587303dfbff2e0f6e8f88f34bf2533361126f22ec3322ef362bf2e083f2b5d9/userdata/shm major:0 minor:826 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/58fb20f0efe35396beaa43bc3d7cc4b5db2f0e64b1edfa9263cafc7641e2c772/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/58fb20f0efe35396beaa43bc3d7cc4b5db2f0e64b1edfa9263cafc7641e2c772/userdata/shm major:0 minor:1189 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5aefb6138adeb7d46c141d72648e74fb238235b8d8af02bde5beca7c384d92e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5aefb6138adeb7d46c141d72648e74fb238235b8d8af02bde5beca7c384d92e7/userdata/shm major:0 minor:357 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/611f0577f694d16ae6cfdfa887a45e57816d4fedaa4b7733f18258fff60747d7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/611f0577f694d16ae6cfdfa887a45e57816d4fedaa4b7733f18258fff60747d7/userdata/shm major:0 minor:934 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6814e0600083f0996ce4c3d6eefe5646615f1a2b02ab21e27a25e1eb855f75c6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6814e0600083f0996ce4c3d6eefe5646615f1a2b02ab21e27a25e1eb855f75c6/userdata/shm major:0 minor:317 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6937b999e172420380651c53fc5e6680d5943c027cccaefd6221f5dee41afb2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6937b999e172420380651c53fc5e6680d5943c027cccaefd6221f5dee41afb2c/userdata/shm major:0 minor:1080 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6b36bbd0455724f4c84a788594d831cdec4b648d0e41f4b0f6e9ae8e3b529de5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b36bbd0455724f4c84a788594d831cdec4b648d0e41f4b0f6e9ae8e3b529de5/userdata/shm major:0 minor:341 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7432a082c2253d23b865426cbd0b7c6fc641fd734bb3b6088975045dd1832638/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7432a082c2253d23b865426cbd0b7c6fc641fd734bb3b6088975045dd1832638/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/757166b43c0c56e8283c67b367d970d37bc2cba347814ca1a8d85ab635b22caa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/757166b43c0c56e8283c67b367d970d37bc2cba347814ca1a8d85ab635b22caa/userdata/shm major:0 minor:955 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79e902522cf9e089c0a0493aeac487bed34c920c85cbed922e6fdff4d7dc7fa4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79e902522cf9e089c0a0493aeac487bed34c920c85cbed922e6fdff4d7dc7fa4/userdata/shm major:0 minor:523 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7bdc639c2478b5c195d66a7791ae65075a49456c359aa49e7fc420db2f85021a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7bdc639c2478b5c195d66a7791ae65075a49456c359aa49e7fc420db2f85021a/userdata/shm major:0 minor:1195 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/819b5de997e19e19a9d977e809d0fb3fdd9648622a344dd4ddd33e56129c529f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/819b5de997e19e19a9d977e809d0fb3fdd9648622a344dd4ddd33e56129c529f/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8928fc78a20804bb52860e947962b354cf91d1529b5deb719ab35788e3ef8791/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8928fc78a20804bb52860e947962b354cf91d1529b5deb719ab35788e3ef8791/userdata/shm major:0 minor:344 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d/userdata/shm major:0 minor:292 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8ee3b1585121acd28cac002efd25a4951438f7aba1490780501fdecb04a7dd12/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8ee3b1585121acd28cac002efd25a4951438f7aba1490780501fdecb04a7dd12/userdata/shm major:0 minor:1071 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90b6bf31b6285b89ba457dc317b7de2db8799afd4d2c378edeab172c14801f77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90b6bf31b6285b89ba457dc317b7de2db8799afd4d2c378edeab172c14801f77/userdata/shm major:0 minor:351 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95f6a209ef68dab4cb5672857aeba51bebac9f6d112d21c7fcd718cb5be803c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95f6a209ef68dab4cb5672857aeba51bebac9f6d112d21c7fcd718cb5be803c7/userdata/shm major:0 minor:847 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a920827df943f06d02da8e8ea819eda5fb31c3dfefaa7f8b86842839ee17dd17/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a920827df943f06d02da8e8ea819eda5fb31c3dfefaa7f8b86842839ee17dd17/userdata/shm major:0 minor:953 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aeeb874811e84346db41fb4fb7b6cad106590322b692edfbf0b6c383addea6a6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aeeb874811e84346db41fb4fb7b6cad106590322b692edfbf0b6c383addea6a6/userdata/shm major:0 minor:332 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba384c9cdc57f87a975d87b2de9f0cfa5598c8a35123c7bc925dcebbf60a5093/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba384c9cdc57f87a975d87b2de9f0cfa5598c8a35123c7bc925dcebbf60a5093/userdata/shm major:0 minor:740 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ba9f914f103017d6ef2cf2c16d508f5302ad218dbd57c88fe26f6d74473e9036/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba9f914f103017d6ef2cf2c16d508f5302ad218dbd57c88fe26f6d74473e9036/userdata/shm major:0 minor:838 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c2c1fb4aec553af65176f49e937958c69c931605beee69d28364ee9ba795514f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c2c1fb4aec553af65176f49e937958c69c931605beee69d28364ee9ba795514f/userdata/shm major:0 minor:856 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ce62aa530e9de7b740f93aac76703fc3a80b1ed5e0bbed25b7228c7b762d272f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ce62aa530e9de7b740f93aac76703fc3a80b1ed5e0bbed25b7228c7b762d272f/userdata/shm major:0 minor:462 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cee650f463641d78c2e399a131e5c5cb6dd2c4bd205c9ebc6a4a1814777051c4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cee650f463641d78c2e399a131e5c5cb6dd2c4bd205c9ebc6a4a1814777051c4/userdata/shm major:0 minor:600 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257/userdata/shm major:0 minor:66 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d06b72c6f0371c1b0257ad61f4ae8d069961f5af58fd20925966cfc79d79903d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d06b72c6f0371c1b0257ad61f4ae8d069961f5af58fd20925966cfc79d79903d/userdata/shm major:0 minor:908 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d153f8589c77234f9dc34525d12bab7d6b406888e2e51c22abf001583537f5c4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d153f8589c77234f9dc34525d12bab7d6b406888e2e51c22abf001583537f5c4/userdata/shm major:0 minor:460 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d1ec5df20bed29547ffb1f52c2c4287cab5554fd187df0c227bb31c435fc62a0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d1ec5df20bed29547ffb1f52c2c4287cab5554fd187df0c227bb31c435fc62a0/userdata/shm major:0 minor:495 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598/userdata/shm major:0 minor:48 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da2e551f19738e875d8b4b505223588d9ea94eb7716af7e0ff449212c8514bb4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da2e551f19738e875d8b4b505223588d9ea94eb7716af7e0ff449212c8514bb4/userdata/shm major:0 minor:792 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e924b0646dc2650e31e1b4cadf6eac6293c32b11a283f47d90fa34c50c73d4f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e924b0646dc2650e31e1b4cadf6eac6293c32b11a283f47d90fa34c50c73d4f0/userdata/shm major:0 minor:243 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ea94bf8965f915667b084d40efeb4f5102c63b750c132e105898d2d86dfc6bcf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ea94bf8965f915667b084d40efeb4f5102c63b750c132e105898d2d86dfc6bcf/userdata/shm major:0 minor:852 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/eab66404c12034ae89f04e45ade44912e55d6fddf5edcf6fc585e549c9b0d555/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eab66404c12034ae89f04e45ade44912e55d6fddf5edcf6fc585e549c9b0d555/userdata/shm major:0 minor:240 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ec43fc3d3a5ac191c7efb625569a2dc8960d02c6765df5d0352ccc2d0da0a0a4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec43fc3d3a5ac191c7efb625569a2dc8960d02c6765df5d0352ccc2d0da0a0a4/userdata/shm major:0 minor:333 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f7c28b40cde4a7aad725d4c7e6669cdabc0febc1e8bf8d8daea1b94e0e12e828/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f7c28b40cde4a7aad725d4c7e6669cdabc0febc1e8bf8d8daea1b94e0e12e828/userdata/shm major:0 minor:809 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f9982a7fe2276ecf5bf8dd3bab737e593501425df536f9820a4bd04690b29d97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f9982a7fe2276ecf5bf8dd3bab737e593501425df536f9820a4bd04690b29d97/userdata/shm major:0 minor:959 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fc42f33929f2a6b9103f7b23ae3ef7d3e614662550ded98a184c1328a4069b14/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fc42f33929f2a6b9103f7b23ae3ef7d3e614662550ded98a184c1328a4069b14/userdata/shm major:0 minor:794 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~projected/kube-api-access-x2hfh:{mountpoint:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~projected/kube-api-access-x2hfh major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09cc190d-5647-40a1-bfe9-5355bcb33b10/volumes/kubernetes.io~projected/kube-api-access-4w5fk:{mountpoint:/var/lib/kubelet/pods/09cc190d-5647-40a1-bfe9-5355bcb33b10/volumes/kubernetes.io~projected/kube-api-access-4w5fk major:0 minor:43 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0adaea87-67d0-41a7-a1f3-855fdd483aca/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/0adaea87-67d0-41a7-a1f3-855fdd483aca/volumes/kubernetes.io~secret/tls-certificates major:0 minor:835 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~projected/kube-api-access-zbw6q:{mountpoint:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~projected/kube-api-access-zbw6q major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14438c84-72d3-4f45-88a4-fc7e80df5fb8/volumes/kubernetes.io~projected/kube-api-access-dfdkb:{mountpoint:/var/lib/kubelet/pods/14438c84-72d3-4f45-88a4-fc7e80df5fb8/volumes/kubernetes.io~projected/kube-api-access-dfdkb major:0 minor:811 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14438c84-72d3-4f45-88a4-fc7e80df5fb8/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/14438c84-72d3-4f45-88a4-fc7e80df5fb8/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/volumes/kubernetes.io~projected/kube-api-access-7g2ng:{mountpoint:/var/lib/kubelet/pods/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/volumes/kubernetes.io~projected/kube-api-access-7g2ng major:0 minor:910 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:867 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~projected/kube-api-access-svz6j:{mountpoint:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~projected/kube-api-access-svz6j major:0 minor:488 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/encryption-config major:0 minor:487 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/etcd-client major:0 minor:440 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/serving-cert major:0 minor:489 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~projected/kube-api-access-7thvr:{mountpoint:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~projected/kube-api-access-7thvr major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~projected/kube-api-access-85vjd:{mountpoint:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~projected/kube-api-access-85vjd major:0 minor:283 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~projected/kube-api-access-mxz2j:{mountpoint:/var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~projected/kube-api-access-mxz2j major:0 minor:1068 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1064 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~projected/kube-api-access-jrdvd:{mountpoint:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~projected/kube-api-access-jrdvd major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/etcd-client major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~projected/kube-api-access-47plx:{mountpoint:/var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~projected/kube-api-access-47plx major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~secret/srv-cert major:0 minor:468 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e/volumes/kubernetes.io~projected/kube-api-access-rp5rd:{mountpoint:/var/lib/kubelet/pods/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e/volumes/kubernetes.io~projected/kube-api-access-rp5rd major:0 minor:803 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e/volumes/kubernetes.io~secret/proxy-tls major:0 minor:802 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~projected/kube-api-access-ssdjz:{mountpoint:/var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~projected/kube-api-access-ssdjz major:0 minor:1069 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1065 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1066 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~projected/kube-api-access-nfmmt:{mountpoint:/var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~projected/kube-api-access-nfmmt major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1067 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~projected/kube-api-access-qvnp7:{mountpoint:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~projected/kube-api-access-qvnp7 major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~secret/cert major:0 minor:676 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3eeb72c3-1a56-4955-845e-81607513b1b2/volumes/kubernetes.io~projected/kube-api-access-jns5r:{mountpoint:/var/lib/kubelet/pods/3eeb72c3-1a56-4955-845e-81607513b1b2/volumes/kubernetes.io~projected/kube-api-access-jns5r major:0 minor:350 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~projected/kube-api-access-2w48g:{mountpoint:/var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~projected/kube-api-access-2w48g major:0 minor:1031 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1021 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1030 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fca1a4-4fa7-4a43-b9c4-7f50a8737643/volumes/kubernetes.io~projected/kube-api-access-mbktm:{mountpoint:/var/lib/kubelet/pods/43fca1a4-4fa7-4a43-b9c4-7f50a8737643/volumes/kubernetes.io~projected/kube-api-access-mbktm major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~projected/kube-api-access-8l8cg:{mountpoint:/var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~projected/kube-api-access-8l8cg major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~secret/metrics-tls major:0 minor:513 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~projected/kube-api-access major:0 minor:226 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~projected/kube-api-access-wpcnv:{mountpoint:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~projected/kube-api-access-wpcnv major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~secret/webhook-cert major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a73a5b0-478f-496d-8b0c-9e3daf39c082/volumes/kubernetes.io~projected/kube-api-access-qtj5f:{mountpoint:/var/lib/kubelet/pods/4a73a5b0-478f-496d-8b0c-9e3daf39c082/volumes/kubernetes.io~projected/kube-api-access-qtj5f major:0 minor:1194 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4a73a5b0-478f-496d-8b0c-9e3daf39c082/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/4a73a5b0-478f-496d-8b0c-9e3daf39c082/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1193 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~projected/kube-api-access-8s7rj:{mountpoint:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~projected/kube-api-access-8s7rj major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~projected/kube-api-access major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~projected/kube-api-access-gtjps:{mountpoint:/var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~projected/kube-api-access-gtjps major:0 minor:836 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:834 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~secret/webhook-cert major:0 minor:832 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~projected/kube-api-access-t6t27:{mountpoint:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~projected/kube-api-access-t6t27 major:0 minor:594 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/encryption-config major:0 minor:593 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/etcd-client major:0 minor:592 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/serving-cert major:0 minor:588 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56365780-b87d-43fc-95f5-8a44166aecf8/volumes/kubernetes.io~projected/kube-api-access-5rzx9:{mountpoint:/var/lib/kubelet/pods/56365780-b87d-43fc-95f5-8a44166aecf8/volumes/kubernetes.io~projected/kube-api-access-5rzx9 major:0 minor:552 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56365780-b87d-43fc-95f5-8a44166aecf8/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/56365780-b87d-43fc-95f5-8a44166aecf8/volumes/kubernetes.io~secret/metrics-tls major:0 minor:599 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~projected/kube-api-access-flln7:{mountpoint:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~projected/kube-api-access-flln7 major:0 minor:845 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/default-certificate major:0 minor:812 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/metrics-certs major:0 minor:839 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/stats-auth major:0 minor:808 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~projected/kube-api-access-4xjhk:{mountpoint:/var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~projected/kube-api-access-4xjhk major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:672 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~projected/kube-api-access-p4jnj:{mountpoint:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~projected/kube-api-access-p4jnj major:0 minor:1110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1109 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1106 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5/volumes/kubernetes.io~projected/kube-api-access-g8p7b:{mountpoint:/var/lib/kubelet/pods/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5/volumes/kubernetes.io~projected/kube-api-access-g8p7b major:0 minor:791 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/60683578-6673-4aff-b1d5-3167d534ac08/volumes/kubernetes.io~projected/kube-api-access-zcmdk:{mountpoint:/var/lib/kubelet/pods/60683578-6673-4aff-b1d5-3167d534ac08/volumes/kubernetes.io~projected/kube-api-access-zcmdk major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/672ad0aa-a0c5-4640-840d-3ffa02c55d62/volumes/kubernetes.io~projected/kube-api-access-t58zw:{mountpoint:/var/lib/kubelet/pods/672ad0aa-a0c5-4640-840d-3ffa02c55d62/volumes/kubernetes.io~projected/kube-api-access-t58zw major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~projected/kube-api-access-h925l:{mountpoint:/var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~projected/kube-api-access-h925l major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:673 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/67e5534b-f428-45cf-b54e-d06b25dc3e09/volumes/kubernetes.io~projected/kube-api-access-s45nc:{mountpoint:/var/lib/kubelet/pods/67e5534b-f428-45cf-b54e-d06b25dc3e09/volumes/kubernetes.io~projected/kube-api-access-s45nc major:0 minor:887 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/67e5534b-f428-45cf-b54e-d06b25dc3e09/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/67e5534b-f428-45cf-b54e-d06b25dc3e09/volumes/kubernetes.io~secret/proxy-tls major:0 minor:884 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6cc45721-c05b-4161-91d9-d65cf6ec61d4/volumes/kubernetes.io~projected/kube-api-access-k6t9w:{mountpoint:/var/lib/kubelet/pods/6cc45721-c05b-4161-91d9-d65cf6ec61d4/volumes/kubernetes.io~projected/kube-api-access-k6t9w major:0 minor:321 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed4ce2b-080f-4523-8527-eee768e06123/volumes/kubernetes.io~projected/kube-api-access-nql4h:{mountpoint:/var/lib/kubelet/pods/6ed4ce2b-080f-4523-8527-eee768e06123/volumes/kubernetes.io~projected/kube-api-access-nql4h major:0 minor:810 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ed4ce2b-080f-4523-8527-eee768e06123/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/6ed4ce2b-080f-4523-8527-eee768e06123/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~projected/kube-api-access-tr4bl:{mountpoint:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~projected/kube-api-access-tr4bl major:0 minor:313 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~secret/serving-cert major:0 minor:316 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~projected/kube-api-access-bnxk9:{mountpoint:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~projected/kube-api-access-bnxk9 major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/72756f50-c970-4ef6-b8ca-88e49f996a74/volumes/kubernetes.io~projected/kube-api-access-zxn9l:{mountpoint:/var/lib/kubelet/pods/72756f50-c970-4ef6-b8ca-88e49f996a74/volumes/kubernetes.io~projected/kube-api-access-zxn9l major:0 minor:790 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/volumes/kubernetes.io~projected/kube-api-access-s9tpx:{mountpoint:/var/lib/kubelet/pods/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/volumes/kubernetes.io~projected/kube-api-access-s9tpx major:0 minor:837 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/volumes/kubernetes.io~secret/cert major:0 minor:833 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/kube-api-access-rbzvl:{mountpoint:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/kube-api-access-rbzvl major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:675 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d3fd276-2fe2-423a-b1ee-f27f1596d013/volumes/kubernetes.io~projected/kube-api-access-cqc86:{mountpoint:/var/lib/kubelet/pods/9d3fd276-2fe2-423a-b1ee-f27f1596d013/volumes/kubernetes.io~projected/kube-api-access-cqc86 major:0 minor:349 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9d3fd276-2fe2-423a-b1ee-f27f1596d013/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/9d3fd276-2fe2-423a-b1ee-f27f1596d013/volumes/kubernetes.io~secret/cert major:0 minor:348 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~projected/kube-api-access-m8b7s:{mountpoint:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~projected/kube-api-access-m8b7s major:0 minor:298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:511 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:514 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a591384f-f83e-4f65-b5d0-d519f05edbd9/volumes/kubernetes.io~projected/kube-api-access-vbmx9:{mountpoint:/var/lib/kubelet/pods/a591384f-f83e-4f65-b5d0-d519f05edbd9/volumes/kubernetes.io~projected/kube-api-access-vbmx9 major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~projected/kube-api-access-c654s:{mountpoint:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~projected/kube-api-access-c654s major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b42aee2f-bffc-4c43-bf20-16d9c67d216c/volumes/kubernetes.io~projected/kube-api-access-lbvbr:{mountpoint:/var/lib/kubelet/pods/b42aee2f-bffc-4c43-bf20-16d9c67d216c/volumes/kubernetes.io~projected/kube-api-access-lbvbr major:0 minor:841 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~projected/kube-api-access-smvtc:{mountpoint:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~projected/kube-api-access-smvtc major:0 minor:103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~secret/metrics-tls major:0 minor:102 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~projected/kube-api-access-dt99t:{mountpoint:/var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~projected/kube-api-access-dt99t major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~secret/metrics-certs major:0 minor:443 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~projected/kube-api-access-cjnjq:{mountpoint:/var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~projected/kube-api-access-cjnjq major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~secret/proxy-tls major:0 minor:671 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c3610f08-aba1-411d-aa6d-811b88acdb7b/volumes/kubernetes.io~projected/kube-api-access-jdgvx:{mountpoint:/var/lib/kubelet/pods/c3610f08-aba1-411d-aa6d-811b88acdb7b/volumes/kubernetes.io~projected/kube-api-access-jdgvx major:0 minor:822 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3610f08-aba1-411d-aa6d-811b88acdb7b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/c3610f08-aba1-411d-aa6d-811b88acdb7b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:820 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cd1425b9-fcd1-4aba-899f-e110eebce626/volumes/kubernetes.io~projected/kube-api-access-s2vbp:{mountpoint:/var/lib/kubelet/pods/cd1425b9-fcd1-4aba-899f-e110eebce626/volumes/kubernetes.io~projected/kube-api-access-s2vbp major:0 minor:821 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cd1425b9-fcd1-4aba-899f-e110eebce626/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/cd1425b9-fcd1-4aba-899f-e110eebce626/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:817 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~projected/kube-api-access-4tfnn:{mountpoint:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~projected/kube-api-access-4tfnn major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cef53432-93f5-4581-b3de-c8cc5cac2ecb/volumes/kubernetes.io~projected/kube-api-access-sm9vh:{mountpoint:/var/lib/kubelet/pods/cef53432-93f5-4581-b3de-c8cc5cac2ecb/volumes/kubernetes.io~projected/kube-api-access-sm9vh major:0 minor:823 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cef53432-93f5-4581-b3de-c8cc5cac2ecb/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/cef53432-93f5-4581-b3de-c8cc5cac2ecb/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:818 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d504cbc7-5c09-4712-9f7a-c41a6386ef79/volumes/kubernetes.io~projected/kube-api-access-tmwbr:{mountpoint:/var/lib/kubelet/pods/d504cbc7-5c09-4712-9f7a-c41a6386ef79/volumes/kubernetes.io~projected/kube-api-access-tmwbr major:0 minor:784 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~projected/ca-certs major:0 minor:452 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~projected/kube-api-access-bs6m8:{mountpoint:/var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~projected/kube-api-access-bs6m8 major:0 minor:450 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:494 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5d9fbaf-ba14-4d2b-8376-1634eabbc782/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/d5d9fbaf-ba14-4d2b-8376-1634eabbc782/volumes/kubernetes.io~projected/ca-certs major:0 minor:451 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5d9fbaf-ba14-4d2b-8376-1634eabbc782/volumes/kubernetes.io~projected/kube-api-access-rrmjf:{mountpoint:/var/lib/kubelet/pods/d5d9fbaf-ba14-4d2b-8376-1634eabbc782/volumes/kubernetes.io~projected/kube-api-access-rrmjf major:0 minor:453 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~projected/kube-api-access-hccqk:{mountpoint:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~projected/kube-api-access-hccqk major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/kube-api-access-x4n26:{mountpoint:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/kube-api-access-x4n26 major:0 minor:257 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:674 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~projected/kube-api-access-lgrjz:{mountpoint:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~projected/kube-api-access-lgrjz major:0 minor:1188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1187 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1186 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1180 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:571 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~empty-dir/tmp major:0 minor:582 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~projected/kube-api-access-npg9k:{mountpoint:/var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~projected/kube-api-access-npg9k major:0 minor:583 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/volumes/kubernetes.io~projected/kube-api-access-k5hmg:{mountpoint:/var/lib/kubelet/pods/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/volumes/kubernetes.io~projected/kube-api-access-k5hmg major:0 minor:360 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ded5da9a-1447-46df-a8ff-ffd469562599/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ded5da9a-1447-46df-a8ff-ffd469562599/volumes/kubernetes.io~projected/kube-api-access major:0 minor:555 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ded5da9a-1447-46df-a8ff-ffd469562599/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ded5da9a-1447-46df-a8ff-ffd469562599/volumes/kubernetes.io~secret/serving-cert major:0 minor:554 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~projected/kube-api-access-gmqts:{mountpoint:/var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~projected/kube-api-access-gmqts major:0 minor:921 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~secret/certs major:0 minor:842 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:911 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~projected/kube-api-access-2svkc:{mountpoint:/var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~projected/kube-api-access-2svkc major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:448 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~projected/kube-api-access-tll8k:{mountpoint:/var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~projected/kube-api-access-tll8k major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~secret/srv-cert major:0 minor:449 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8a7e077-3f6c-4efb-9865-cf82480c5da1/volumes/kubernetes.io~projected/kube-api-access-mncvz:{mountpoint:/var/lib/kubelet/pods/e8a7e077-3f6c-4efb-9865-cf82480c5da1/volumes/kubernetes.io~projected/kube-api-access-mncvz major:0 minor:789 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~projected/kube-api-access-r8bm4:{mountpoint:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~projected/kube-api-access-r8bm4 major:0 minor:233 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fed75514-8f48-40b7-9fed-0afd6042cfbf/volumes/kubernetes.io~projected/kube-api-access-h9t7v:{mountpoint:/var/lib/kubelet/pods/fed75514-8f48-40b7-9fed-0afd6042cfbf/volumes/kubernetes.io~projected/kube-api-access-h9t7v major:0 minor:459 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fed75514-8f48-40b7-9fed-0afd6042cfbf/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/fed75514-8f48-40b7-9fed-0afd6042cfbf/volumes/kubernetes.io~secret/signing-key major:0 minor:458 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/2904620dda59a0bcdd59d19b18e02e96e9bfe0c84f2a88f01e0606a73cab5341/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-1012:{mountpoint:/var/lib/containers/storage/overlay/80317eec5e403b80017d4528ae0e5d2b3046493577adb0f693020c49d244dc84/merged major:0 minor:1012 fsType:overlay blockSize:0} overlay_0-1017:{mountpoint:/var/lib/containers/storage/overlay/3dc099c1fca9f1bd9a2aef87b46b90beff266b2bfc6537e71168164a809211d4/merged major:0 minor:1017 fsType:overlay blockSize:0} overlay_0-1019:{mountpoint:/var/lib/containers/storage/overlay/188b65552f1dc24e2e10404ba4eeb3a83b59f37ea430d09304d7cc899c417634/merged major:0 minor:1019 fsType:overlay blockSize:0} overlay_0-1050:{mountpoint:/var/lib/containers/storage/overlay/04853a7e20d7fcff78a33d9033d2f787e9d250eb23909dac2cc0daba9a0963bb/merged major:0 minor:1050 fsType:overlay blockSize:0} overlay_0-1052:{mountpoint:/var/lib/containers/storage/overlay/8e45283b9076110dab6d5202349dabb546b25182a7ce191e61d69cffc03099b1/merged major:0 minor:1052 fsType:overlay blockSize:0} 
overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/3527e7e7fe2119f5121040d37ae666efb9413b50df962a8a7d9ca645632f52a8/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/ca39386eee2590b86c0a99acb21d0282c1275e254621fe3038c285dd41f41ee0/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-1073:{mountpoint:/var/lib/containers/storage/overlay/f2ab48c73e0e0aaea1249961d140ab7b892d5c7e35ba4143bda4fac66f76cce0/merged major:0 minor:1073 fsType:overlay blockSize:0} overlay_0-1078:{mountpoint:/var/lib/containers/storage/overlay/048c790ee686f17c4814d24e78048f522b7dc3db6ebba0f8d07c99f6bc55ffc6/merged major:0 minor:1078 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/4f54f4b2687d64b678cf6b18f88e2ac5823b4c03d5fa6ac3cd9f9283717f1340/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/3edb2d4d53ae5a0167b282e9a71f41406eb675485018475060cdaac5f06fd71b/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1091:{mountpoint:/var/lib/containers/storage/overlay/b454086c7cb04d772a5475cf9810c44dc7072990a0baf0de3fce7c87e2b68572/merged major:0 minor:1091 fsType:overlay blockSize:0} overlay_0-1093:{mountpoint:/var/lib/containers/storage/overlay/26a599feb710d801d2f4c64b3b7652d02cefc5ee52d51b1e929a7f19c8be9e81/merged major:0 minor:1093 fsType:overlay blockSize:0} overlay_0-1095:{mountpoint:/var/lib/containers/storage/overlay/5fece8f9f79cd457948adf9581c666460bae6af35908def84caa719400fd7c4c/merged major:0 minor:1095 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/5140e38a04db1e9ed84f64de2900225ebcda5a21125604521cba334e3be7ade4/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1116:{mountpoint:/var/lib/containers/storage/overlay/73e958fb62a4a8d088cb467a2c7c9546bba1d29eb80325e6d5e3cbc165f3bae1/merged major:0 minor:1116 fsType:overlay blockSize:0} 
overlay_0-1118:{mountpoint:/var/lib/containers/storage/overlay/9ebdfbbe4080aa1a166a6ca092b1f6d79b6b2daa105d9d3b18b4afd952059a74/merged major:0 minor:1118 fsType:overlay blockSize:0} overlay_0-1119:{mountpoint:/var/lib/containers/storage/overlay/e3978892bb9dcf3264a246e8cbf165d8bf09668580cba84686e2162b37b0a232/merged major:0 minor:1119 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/ce0342f08be1ad677881da1fb3452c8c36e4d67ddab2383634906b443ecb8d62/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-1128:{mountpoint:/var/lib/containers/storage/overlay/37dadaa5989d36f4c8dc25df5b51abe703fc0b282f763a865dd35c7d5fe24180/merged major:0 minor:1128 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/dd556d20a138322f07604e64d9e37cf332a16454787a89a24b5a675291246bc6/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/22c1fbc5b96f1c51ea666ab7b45287e0a78c75af88140005e06acc8c5686c8a6/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-1146:{mountpoint:/var/lib/containers/storage/overlay/bcf435bddf04d178fbdf5f97f7d0be9f5ab962b51310a7525ce6220835035c9f/merged major:0 minor:1146 fsType:overlay blockSize:0} overlay_0-1191:{mountpoint:/var/lib/containers/storage/overlay/b714b01e28124502e6b9100f89baf50b2b039eefea5c728673e1b195b1c34eec/merged major:0 minor:1191 fsType:overlay blockSize:0} overlay_0-1197:{mountpoint:/var/lib/containers/storage/overlay/f4be72f031ce2e0537a0b72e2157f4339802d2bdaafa1eb051f5432b634ce968/merged major:0 minor:1197 fsType:overlay blockSize:0} overlay_0-1199:{mountpoint:/var/lib/containers/storage/overlay/c6c7372e7de93d12f28a54b571c24c2a8cc268f7de341f63a9331c3ba8615215/merged major:0 minor:1199 fsType:overlay blockSize:0} overlay_0-1205:{mountpoint:/var/lib/containers/storage/overlay/4e55da4a745a9bfab7c4659c2eb4bea1eaca1b44abbd99f3747b609128bdbd49/merged major:0 minor:1205 fsType:overlay blockSize:0} 
overlay_0-1207:{mountpoint:/var/lib/containers/storage/overlay/6aa690e164bad2e5bae727e1f17e0e2f913ef1319970cd97bfe2374cc6bc18eb/merged major:0 minor:1207 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/ea95dfb0f637454790e996bca5f05544200a8893caf3bebe22d9b0aaa8ab9cd0/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/66fdd1d95ba0cdac8c3a4b718f78204ca09b28a17654a9202a475ff1ad6ce07d/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/4c55a61835eaa8ea2f6ba609e47d5fde6f7e1042ab1ee78896c0396d5cc9c28e/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/662b8e86db31eaf0280c1f823ae78c9798f8b0aa9aefa8f7fa85a2a523919c07/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/a27374ce4ecd442d427d7793911d2be44cd2b6f573055540f6cd5347a3d5d1d0/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/9dc8b2540527dc86a59db8a8d04290eff7d4d6a10569534c7ed642d46ee5f99a/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-153:{mountpoint:/var/lib/containers/storage/overlay/400a88756da15a89814d9064cd8867af2d3b248a9f708b2d5e30e09b796dee04/merged major:0 minor:153 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/9494e9dccc5b85677cd6010f8166c04283b373c2f0b15415c3605eeb1ff3e1d9/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/e8676547543c087854b294f0b5a5011e84617557e66bf4a7778f3a8ddca85939/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/8ce953a703adb3190036eb90c0274dbf8b18e39e39487de5d5182d916a189b17/merged major:0 minor:160 fsType:overlay blockSize:0} 
overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/9f4ef13e2a16bc215dcf1db9b6805b6a8fa43ab3000de0680df181dd64d3434d/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/74e4df1edaea55b7254f6da06dc5aab31a6aa44d63aeb4954c94ba7e4b6c23e5/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/dbcbb63f73e77f44ffdcd28f8d1c7f4016cf49787c7484ae5ee2615deb92f109/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/6430b9f83e64d4f4df986656d12d3b189bb6439165ae5663d6c8645c0ca71aac/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/83850342d3de4dee842d0530d52a953a1d0ca66739cc3837f15d3fe687347a70/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/d1663bc1f9a653636bcc8ddfa58552fc7fbdd6fc29dddb07391195367829528c/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/8d8bcf09de56103234b5b43dbde5d875962fa65b1af5862d21ec60fff264550c/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-251:{mountpoint:/var/lib/containers/storage/overlay/91d1d35772902720a039771c2b348d76f81adeacf4e9363f00e52d88579745b3/merged major:0 minor:251 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/8a917b909955e4c8a038cf09eaf92798d2be9aca55c80d573ce0f1d7a064c02f/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-266:{mountpoint:/var/lib/containers/storage/overlay/d9607c509c265ac6aadf2928e0954775b15d6a0d1b7e58f75452ec0573d57041/merged major:0 minor:266 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/c5a19326b9b6995f871cdaac34c49cbff0f75a5b169cf0c1099a1e4ac2d44436/merged major:0 minor:269 fsType:overlay blockSize:0} 
overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/ee08dbac3cdbaa14412c1920efa7e12316a52bff5711479e93824f4c85099d1a/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/c55a17798997026925a872ac772cd65dc48ad4757ace7594bd50d629497713b2/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/0c296d31575c30a6fd89d3b4201b986e85ed3cc3e030fdc9208455397feb2c1d/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/8b6c67976d436b15ec4184506ed0c5d480c905a03ab359159030bbc3e3621452/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/c76a3d692606db9bfd36450cfa72529c219dfa7ff18fc95754683f3daec3854a/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/e128a3d25d1c0d399dd979135719edbc61f916cedccaa663403b4da33a37031c/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/0b6c9caad65671be7f4fea40ffc6d71e159823658021ed1608695cae77b7c685/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/5cd4f9d50bc5b8c686ad6a532716609cfc2d99b9d115a3c6b863ae1115c71420/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/b0a97dcd648b7ee621dbf960ce554aaaf6819be74a70ef2078ed795ef815c737/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/96eac3dc8c82fef6c5ccad5bf64c7018253c61217eb3dd430af24d33fe888e8e/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/0c51e2a0d087074f6fc972e9cb72d7a93cb1edb58db4e5db7167ab87ed14e73b/merged major:0 minor:314 fsType:overlay blockSize:0} 
overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/88cf48e33945613db7877ef2abc1f8225822ad30e0916cc5d4d55e94973ba95f/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/e3e0a8aebaebd0d3c6de2c991fdca0573c7740c309258ac24d4c17e17301ebb7/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/54c4d527eb4b16f467cc8c1d395f975603ef380be4d9445a4b1436be933d2989/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/d1bab8faeed2c6b5d4fa00ee4a9279ba80621967ac31f8ad3cd309bb62eff047/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/ff580d88ade5682b85fa8bda9e61e02d018e35f56bbbd3f448461a3886ce6454/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/b2afef22072b0e2a3537ce5298934ddb578afa97e4c89fa5502fb814c60335a4/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/b738f679eddaef915c706c45983c4e62b60db977ba3c1d2f2e97e246963036df/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/ebf46bdd4fdacea05a94276e145d135840f5d2ce9bfdfbec32208d0a148c8f29/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-361:{mountpoint:/var/lib/containers/storage/overlay/67b8b3ac4b65c235cbe91e865ea67197e471207920dfce4cae8786520c118444/merged major:0 minor:361 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/cb68f3cee4c9f4f1474108bbac8d2d34ce3b52a531bc0866d01ec328ca89e23a/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-365:{mountpoint:/var/lib/containers/storage/overlay/449fd4dee7fa5df3205c1c8d3ec3f8be3e9faeb741c9ffa256e1c73038345014/merged major:0 minor:365 fsType:overlay blockSize:0} 
overlay_0-377:{mountpoint:/var/lib/containers/storage/overlay/2f4d7b9eeb539e787c5195d9bb186154d9975d1f6f3b9c881a2ec82deb86abf7/merged major:0 minor:377 fsType:overlay blockSize:0} overlay_0-382:{mountpoint:/var/lib/containers/storage/overlay/8ce80e174437a0707c8c4dbf80a834e54db45346d5496b01ddc3389f21d9cb43/merged major:0 minor:382 fsType:overlay blockSize:0} overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/4ea149f88f3111289bee0c007d9897da3b4984bfa8e4e4b8cac866750b0292ac/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/ce8066c10825eb55579c5b22211c2481420f8df04ae2598e7f42439952979360/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/cb7799d218a57dc03510c695203b5f2a9c2b06a7286f78d57587762edb964219/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-395:{mountpoint:/var/lib/containers/storage/overlay/3863dc1cb04289f9b9d49fdd562affabeed6ad9c8ae2f66afe082a4f32b24fc0/merged major:0 minor:395 fsType:overlay blockSize:0} overlay_0-396:{mountpoint:/var/lib/containers/storage/overlay/5c81ebf61d9590bda7514a56ad03ba2d9b0ea80e15cabfc47395e1e3e8d66f0a/merged major:0 minor:396 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/6444b9d553182f1adbe9173a8a292126d2fdaebeaf744b4ef7776bc2b8096de5/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-398:{mountpoint:/var/lib/containers/storage/overlay/26e743cba1d87f24c2a2224c36233a7fd32adc7d2c3dbcc2bdb1a3457d7d11b2/merged major:0 minor:398 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/65bdeab08854042b4a8b059ea1d3e5b60074f6ea431aff44028efd43d8270d06/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-402:{mountpoint:/var/lib/containers/storage/overlay/796beb7d0d2377cddf741379209f282f56366e33f127b12eac2e360e19e4b17c/merged major:0 minor:402 fsType:overlay blockSize:0} 
overlay_0-404:{mountpoint:/var/lib/containers/storage/overlay/b5d379d16e68cf8cdeb42938781b1f66cf7a815a4b568599e2d2d1b167939d63/merged major:0 minor:404 fsType:overlay blockSize:0} overlay_0-405:{mountpoint:/var/lib/containers/storage/overlay/347d81a528281f9f50944e1a7699dd3375885ed56f8d143831b04a7dc93bc9ce/merged major:0 minor:405 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/0454ff7f52073d57ae389537928e65d34a3d14be1e894ce7b13240b870d15057/merged major:0 minor:416 fsType:overlay blockSize:0} overlay_0-417:{mountpoint:/var/lib/containers/storage/overlay/8417166331bc14af47cb6dd4e91c987e6a568965b5d7c1fd7bf016f73b79b24c/merged major:0 minor:417 fsType:overlay blockSize:0} overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/f1f12b7da7f4ea75ab0553be038d626f29b4aed16de7d549da47a7a5f53ec795/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-427:{mountpoint:/var/lib/containers/storage/overlay/fdc23ae6be3064bfd5d1dff435aa060f9fbc23aac471bc8c038844c559632cb8/merged major:0 minor:427 fsType:overlay blockSize:0} overlay_0-429:{mountpoint:/var/lib/containers/storage/overlay/a4917c3caa7411ce7513dc635c81cbe13203e23df11becb5316fdba3fe29b662/merged major:0 minor:429 fsType:overlay blockSize:0} overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/663848ec28dfb990004121c3ea4e622185a4b9c319eb0ab0a2c199a530f62e1c/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/8badbb4bc2ef4d80f7b2ffb26767f2e6a0286ea76ec5cced5bf5e1a550256fb8/merged major:0 minor:437 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/a9b973500e27ccb2428d3efabde2ec9c3c77369043a91673b0e4c1345d3575b3/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-441:{mountpoint:/var/lib/containers/storage/overlay/94b5f3b332147b2f0d154519c8bfffcbd6b30add87db2f4efac3d30fa6cc1d9d/merged major:0 minor:441 fsType:overlay blockSize:0} 
overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/16f9ec4ec964e52d38f81ef372c7176a2325bc30ff4d568b6bbc584bb0697586/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/766f20beac6e9511991841f3d70badffa2fd7a9850de119c5eb103cc8a78a525/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-464:{mountpoint:/var/lib/containers/storage/overlay/ec02fed1ca115c495d5f2e98245810306615959f7cf3302bbee64a9ef648e6d0/merged major:0 minor:464 fsType:overlay blockSize:0} overlay_0-469:{mountpoint:/var/lib/containers/storage/overlay/5c3e68037704ddd1edca70e8b28bdfbad454a1dacfffb7415d138caedcd252b7/merged major:0 minor:469 fsType:overlay blockSize:0} overlay_0-471:{mountpoint:/var/lib/containers/storage/overlay/86c4058226cb95daa5a22f709125245b2386afe8bc5fb5a73492051d44db60d3/merged major:0 minor:471 fsType:overlay blockSize:0} overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/e50fcbbc061230191f389f2ca30a8a7a08082c9cca8889038e6bb94bf5e3afe7/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-497:{mountpoint:/var/lib/containers/storage/overlay/a8908fb9ca65dee4bdff6cddc83ea02b4cc00b5985f314be7f0b4cf17aa43f50/merged major:0 minor:497 fsType:overlay blockSize:0} overlay_0-499:{mountpoint:/var/lib/containers/storage/overlay/d1392792451160723af70c576d1135cbdd4e8fc99958b28c67943dc0b8e23a0c/merged major:0 minor:499 fsType:overlay blockSize:0} overlay_0-510:{mountpoint:/var/lib/containers/storage/overlay/0f3f3d843e3a390c5b72028dd0252c6e4c19c1f820802842b8d6b52ef5a603f1/merged major:0 minor:510 fsType:overlay blockSize:0} overlay_0-520:{mountpoint:/var/lib/containers/storage/overlay/b3f3c943261bbdf05b8dadee06b18f35bc47734dd52ef76f4068ec261ed86396/merged major:0 minor:520 fsType:overlay blockSize:0} overlay_0-526:{mountpoint:/var/lib/containers/storage/overlay/2689d0b9b7a1b9024910980683bf10286e3f77fb16698f3a9b0ed46492719957/merged major:0 minor:526 fsType:overlay blockSize:0} 
overlay_0-535:{mountpoint:/var/lib/containers/storage/overlay/68a0b2c97cbed9beb96b2ab6dd1c29d4b7a2311bb07d789a7507608272807da8/merged major:0 minor:535 fsType:overlay blockSize:0} overlay_0-537:{mountpoint:/var/lib/containers/storage/overlay/31b290d1418e265fa31a8fda42217fe67ca99f5007869a2adc2920a6cf34560b/merged major:0 minor:537 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/1b9db7c47cc0cd749c11fe6154a74e32a97cb0620df5c6b5bc3aeaba2dff9a4f/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/3af0e01f53013627a002eff422c7050d0081519e8f9e5ff138348b0f6b0f7fd9/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/acf11587e96886231970d93d45bd97efd147a68ca12aa840e679814ebbe18f43/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-563:{mountpoint:/var/lib/containers/storage/overlay/2af2d7babe2a596505c82255d0347c5b23ade7921526c70e5590f9c9958e44c7/merged major:0 minor:563 fsType:overlay blockSize:0} overlay_0-565:{mountpoint:/var/lib/containers/storage/overlay/bb0eafec42791511af511ea533d4b8ca383d2bf7ff921c9a0dd805a01778bb2c/merged major:0 minor:565 fsType:overlay blockSize:0} overlay_0-567:{mountpoint:/var/lib/containers/storage/overlay/9c0ae6a592a71ce3d1ed475e09f14de9685ed3d112b12271a3e3b3e9c0a56c57/merged major:0 minor:567 fsType:overlay blockSize:0} overlay_0-575:{mountpoint:/var/lib/containers/storage/overlay/b5aa735d600271db4f1b057880967589bdfc110ef5db611bfd2dcde6d2ec7251/merged major:0 minor:575 fsType:overlay blockSize:0} overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/fd6ea2e4c781a5a9cd9112f32f985d9b9b6d3937212f9e6cc4e10f3d6bbd0a69/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-586:{mountpoint:/var/lib/containers/storage/overlay/5bdff70a115cd545a6dad8820b58c532fae12b58925473475540f65c605d341f/merged major:0 minor:586 fsType:overlay blockSize:0} 
overlay_0-597:{mountpoint:/var/lib/containers/storage/overlay/bd7014d0766ffe69a31f8264750572f569853fec881bc550fa95303f32cc3d8b/merged major:0 minor:597 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/6df828bc840f0e492ebb3d24b688078b7ed7e73ccc58f6102e11f06f992ab8de/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/04d5cb3d4978f72a35e76a9284eb26ee2194a524aaa778cae93a46b4ecbfd055/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-61:{mountpoint:/var/lib/containers/storage/overlay/ec2293d71c8b89ce29a8d179cd43218f98e5962d29051f5005eaac433fb604a2/merged major:0 minor:61 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/be74c4d4c427cf77a93faf39860473caef7608055f252ceb71f8b279a21e1edd/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/a4970cf0ec191ca0d25095768fab087cf2081c81398a70ec19ce141314f8086b/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-613:{mountpoint:/var/lib/containers/storage/overlay/f48d4df368ec981761b11b8e7b2dce77e752a721ef05e9ba5e56243c1a9a952e/merged major:0 minor:613 fsType:overlay blockSize:0} overlay_0-616:{mountpoint:/var/lib/containers/storage/overlay/1df6c1b2884cfc3a71a888d37286041ce9c2660bf9ccdacce6bf507a50dde371/merged major:0 minor:616 fsType:overlay blockSize:0} overlay_0-618:{mountpoint:/var/lib/containers/storage/overlay/dfa81366ba2d24f1f864a1c2974713eb63fa93594ca2501310028cd09dd72501/merged major:0 minor:618 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/aff307319112c1a29cc91d081ccf6bea6bdeb67c41236fbb64da4e02ef4aeb03/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-620:{mountpoint:/var/lib/containers/storage/overlay/1bb210668a74a3aef875a6842508dc7fe12e4e41340be72b88758e133905e3c5/merged major:0 minor:620 fsType:overlay blockSize:0} 
overlay_0-638:{mountpoint:/var/lib/containers/storage/overlay/544fbc1839bc1aca2157147c5ca7219b077d6fb51691ddc97ab4c15abe2ad04f/merged major:0 minor:638 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/d4a3996f01e10ca714169ddedf1703b6fd8bcf2e80df12de2726234b6800b95e/merged major:0 minor:639 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/d88a441b8705ab0e4c45c90c034d679888ae6d83087c192c7fe3d97602911e60/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-643:{mountpoint:/var/lib/containers/storage/overlay/37c09e642696ab39b4b1f45fac0e4407515a35d89789aa4ebff456d81e09a6eb/merged major:0 minor:643 fsType:overlay blockSize:0} overlay_0-663:{mountpoint:/var/lib/containers/storage/overlay/b576de5f547edee4c3649c50389631815139b74314f9e01a1a828302d6c9c25e/merged major:0 minor:663 fsType:overlay blockSize:0} overlay_0-665:{mountpoint:/var/lib/containers/storage/overlay/403294e7346e0d4981a9c43982fed3ba67fbaba2d1e98a13454bad588be7dd8a/merged major:0 minor:665 fsType:overlay blockSize:0} overlay_0-683:{mountpoint:/var/lib/containers/storage/overlay/e64f93233674b0f8804a77d00be27e3be8acd8ea0b1dc87f7d5341981022f513/merged major:0 minor:683 fsType:overlay blockSize:0} overlay_0-685:{mountpoint:/var/lib/containers/storage/overlay/5c9063344ccde0c539023d81a11a08afd3f902d04d0830ca7ca9c30cb2bdccd1/merged major:0 minor:685 fsType:overlay blockSize:0} overlay_0-687:{mountpoint:/var/lib/containers/storage/overlay/a6e3f567109aac79c1151ea67eb9023ddd049f64504790449e95742c37c36efb/merged major:0 minor:687 fsType:overlay blockSize:0} overlay_0-693:{mountpoint:/var/lib/containers/storage/overlay/daa99d914d4690df46a439a502d7f01b72cb43a260395964a0c4c3c2e95a3c94/merged major:0 minor:693 fsType:overlay blockSize:0} overlay_0-695:{mountpoint:/var/lib/containers/storage/overlay/ecc5fdd079df5264a07ba2caff1cfc62232bdc3e33caa6e0fcffc4e110d41895/merged major:0 minor:695 fsType:overlay blockSize:0} 
overlay_0-697:{mountpoint:/var/lib/containers/storage/overlay/a3a25018970c7f3348041b7c3ffbbf9cc1da16454d3edce66108eb86d28ab1f1/merged major:0 minor:697 fsType:overlay blockSize:0} overlay_0-701:{mountpoint:/var/lib/containers/storage/overlay/977b7e00a31300bc67e9bc8a424d22010d6b3a3cebbeb77e4266f9c9a1890b45/merged major:0 minor:701 fsType:overlay blockSize:0} overlay_0-705:{mountpoint:/var/lib/containers/storage/overlay/6e805fdf228b0e9ab2dd74a0101552594324562c8f7c94455bee18847e288479/merged major:0 minor:705 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/d354b93cdb9fc2fde351836485d1475f604fbd11253594be56682d3655b5d76a/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-717:{mountpoint:/var/lib/containers/storage/overlay/b8acd7a822d9d3cb161e5c6865b7fb8b92b780bdaa06866c47fcf416d2a4cef0/merged major:0 minor:717 fsType:overlay blockSize:0} overlay_0-719:{mountpoint:/var/lib/containers/storage/overlay/f12c0a680875b926f27986b6358318363dcd1f5094a64d33dd8c36fbe0327576/merged major:0 minor:719 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/2100ca62f24210253cd217ad2fe6242fd0facf8d6536436f88dd4d6fda8b267a/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-720:{mountpoint:/var/lib/containers/storage/overlay/60917d3935adf52c2c14b627ee4d0481a603e277ab487dee6d8b9f26db582f2c/merged major:0 minor:720 fsType:overlay blockSize:0} overlay_0-722:{mountpoint:/var/lib/containers/storage/overlay/1340fcf2daa1e327947605b470897349a855fc3e62c7ea2942e85bc8a65d94a9/merged major:0 minor:722 fsType:overlay blockSize:0} overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/489597874d7aa02e0b8ef6ee37cbb150fbb5a8c7e8a10e99d529622438dab8e0/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/23d246f897cc69b23ffa0e5f14700d59abcbf10c6cccf009562c54d16f8877ac/merged major:0 minor:742 fsType:overlay blockSize:0} 
overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/f498186840a66918d700d99d105f518c405fd7db00c6e01459ae1ca5695dfdd1/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-760:{mountpoint:/var/lib/containers/storage/overlay/297f365c67787c561ef920de625958e66fa07a71950a0e0bfeab87d54c7e1723/merged major:0 minor:760 fsType:overlay blockSize:0} overlay_0-762:{mountpoint:/var/lib/containers/storage/overlay/e6a6f333c09d1334b9730547246362dcb0c6a33d77142c2e199e30a2faf31276/merged major:0 minor:762 fsType:overlay blockSize:0} overlay_0-778:{mountpoint:/var/lib/containers/storage/overlay/86dfc8b85d207e2007e1bf8fb00ca093c070dd6830b1445b2123309cfdc69af8/merged major:0 minor:778 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/9603b939bc6d5eaba23591d0f2364dd3ef2524583bd0020382059803c4df7575/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/0a4862a365bf4b516171426d7be23709fa7763e4641ea9d840b3ef24b88969b7/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/9ae4511b8553ea2b4fe4d45a7b9fe5b662f973337cf0b862e4e06c652d5e1934/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/ac5c2fda673c59bbf704269046099703690d39096ddf04d7dad59b252c728ffa/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-805:{mountpoint:/var/lib/containers/storage/overlay/385467f4e1d7c8bdceb0a59d3a8508874598044f5a99fd5f586ebb943f8fcef3/merged major:0 minor:805 fsType:overlay blockSize:0} overlay_0-813:{mountpoint:/var/lib/containers/storage/overlay/ccccad251523a5c62c8bfd51d084fa5e0dee6a715b41e614ad0b21c617506251/merged major:0 minor:813 fsType:overlay blockSize:0} overlay_0-815:{mountpoint:/var/lib/containers/storage/overlay/679ead2c9195635d5bf8f503cbf9c360de06971682fdf6b75f1135f958be751c/merged major:0 minor:815 fsType:overlay blockSize:0} 
overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/8af6b8d5ba9ccaa4db54ef59de122c31aa041d2fe6f9808c77c629b1c007734f/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/0b5fccd5ed0ba038f0ed264af8c8f6ca04fe137dff96f4eb6995b1bd078a3047/merged major:0 minor:824 fsType:overlay blockSize:0} overlay_0-843:{mountpoint:/var/lib/containers/storage/overlay/27731a9879283243b192bed60c8fa81dfeec8d9d5a9f82785d0b78f22a4f5a91/merged major:0 minor:843 fsType:overlay blockSize:0} overlay_0-857:{mountpoint:/var/lib/containers/storage/overlay/c1f0fdce919329f6cfc28d96292fce9637e07a490ba7c62c9145f68e2d1beddd/merged major:0 minor:857 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/333895d0e9aea0dcdf83abf1d4c775fbb624132094cfe870a91c6452c6394ccd/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-868:{mountpoint:/var/lib/containers/storage/overlay/cbc2d0307f89ec824cf62a6561942c57344fe125044b7dbe7a061637a3029cdb/merged major:0 minor:868 fsType:overlay blockSize:0} overlay_0-870:{mountpoint:/var/lib/containers/storage/overlay/f31784991d338ada119403b2d257a985c06d13bc35941c473c11e56148b1ce0a/merged major:0 minor:870 fsType:overlay blockSize:0} overlay_0-876:{mountpoint:/var/lib/containers/storage/overlay/c9b5b62e89a9cab658dc7b9479e3b9a7e3c7bd126077c66d6eef919fdde9db74/merged major:0 minor:876 fsType:overlay blockSize:0} overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/e29271d1e29addd92c66c03c233063d502e2e48e85f0d9f9bfa2b578a2f35ac2/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/89ff4718ce9150320605c84b7fdabcf39fcdefb56c1bac799333a9b9a8a867fd/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/1c10495e56a1e50ae4d11d68325e7ef06b5bd99c557692ef252e9ec9ed4ceb4b/merged major:0 minor:880 fsType:overlay blockSize:0} 
overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/a54c1f23ca93de37447490028724e919b63380ebccf6ed55c4b04eca486142b4/merged major:0 minor:882 fsType:overlay blockSize:0} overlay_0-885:{mountpoint:/var/lib/containers/storage/overlay/62e32875cedfec0e46bd2c8c6c6835ecc969dfa67f1d08880cffb04914b82b42/merged major:0 minor:885 fsType:overlay blockSize:0} overlay_0-892:{mountpoint:/var/lib/containers/storage/overlay/728bdecfe255d56441f9c4970502aaf1b7b7ed5df87c62ed885d689ee64c0d7c/merged major:0 minor:892 fsType:overlay blockSize:0} overlay_0-894:{mountpoint:/var/lib/containers/storage/overlay/07d11d8fb526013b28de3a2ee3191e1443e2e0cfa4d83bfa4347073de50f8d80/merged major:0 minor:894 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/e11e70f1176dbf6979a81dc68ad3b9147866bfc065eca6e1ecbd7163fffea4af/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-914:{mountpoint:/var/lib/containers/storage/overlay/eefa1333d30ea95a3f20227fc463513fd9cc8cbc7a7f506419b050e12fca4eff/merged major:0 minor:914 fsType:overlay blockSize:0} overlay_0-920:{mountpoint:/var/lib/containers/storage/overlay/439cd53d221a7a0d90ea33d8e46443f4b00861b989be16ad96454d04a634aaea/merged major:0 minor:920 fsType:overlay blockSize:0} overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/153ff100303fac5416ce113451c86ad4f8c96b0a5a2c4ea4de144a0043fb3a61/merged major:0 minor:932 fsType:overlay blockSize:0} overlay_0-939:{mountpoint:/var/lib/containers/storage/overlay/7cdbaea44fefeeb1c9bd25081fae30a4342de3d3809c504696d6b1418dd233df/merged major:0 minor:939 fsType:overlay blockSize:0} overlay_0-940:{mountpoint:/var/lib/containers/storage/overlay/e248c69267163d104573bbe117aa3e2472e8e0299f8ca65ad793c448b71b81c6/merged major:0 minor:940 fsType:overlay blockSize:0} overlay_0-942:{mountpoint:/var/lib/containers/storage/overlay/fd5ebab255265efebcc8f43970329d6c797f0f0bf6c1248212c3827c96a34e8f/merged major:0 minor:942 fsType:overlay blockSize:0} 
overlay_0-944:{mountpoint:/var/lib/containers/storage/overlay/e628e34edd3ca9d976f20a3e13521365fc8d8e7f571722d4c64a49f5e7f8a50c/merged major:0 minor:944 fsType:overlay blockSize:0} overlay_0-950:{mountpoint:/var/lib/containers/storage/overlay/685e47997b701868be069f4a17f93092de87911a85d3191b098717678ff5a8c8/merged major:0 minor:950 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/6fbbaba6cbe13b574218f5907f0f9f98268fc81fb9ed093fb7c796496ddff537/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-957:{mountpoint:/var/lib/containers/storage/overlay/15da836543be6caa1011be880ceb97371f7bf72c618623fa9593c6eb1c709f00/merged major:0 minor:957 fsType:overlay blockSize:0} overlay_0-961:{mountpoint:/var/lib/containers/storage/overlay/f53185b109dbb4c23898de8fda2a6590482ffb8cc9a7a2a6d345dfe7ba907111/merged major:0 minor:961 fsType:overlay blockSize:0} overlay_0-964:{mountpoint:/var/lib/containers/storage/overlay/79509a90d5110177fce5b4b163bdf4665953fc56f3b4f7d43cd1e17d78db0404/merged major:0 minor:964 fsType:overlay blockSize:0} overlay_0-966:{mountpoint:/var/lib/containers/storage/overlay/0703728053bc62351fed7c3cc239b3dcbf74610b6fdcfb47975596309e3c8374/merged major:0 minor:966 fsType:overlay blockSize:0} overlay_0-972:{mountpoint:/var/lib/containers/storage/overlay/7a415d186b07c74dd20fe13043e84ff1b5a3f7dab475fd265e5a3fad7571aeba/merged major:0 minor:972 fsType:overlay blockSize:0} overlay_0-974:{mountpoint:/var/lib/containers/storage/overlay/d3f3435ec3cc393e38e16fee95cbe1b4aaf99d7a617e74aa75e28ad8554bdedd/merged major:0 minor:974 fsType:overlay blockSize:0} overlay_0-976:{mountpoint:/var/lib/containers/storage/overlay/f0b30a5ada9c0ec64dadb538668e6965aa676d6087e4bd675e0521439d1518e6/merged major:0 minor:976 fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/6e903d7877e982dcc069f2016e9b1b5aabfc2fce28d81b4af95aaa4bcbdaf9d5/merged major:0 
minor:982 fsType:overlay blockSize:0} overlay_0-994:{mountpoint:/var/lib/containers/storage/overlay/3bafa0a1d9068a19f9c765858498b681f1264409ccb977423cc7035ac0a05db2/merged major:0 minor:994 fsType:overlay blockSize:0}] Mar 19 09:33:35.208426 master-0 kubenswrapper[27819]: I0319 09:33:35.207060 27819 manager.go:217] Machine: {Timestamp:2026-03-19 09:33:35.20611817 +0000 UTC m=+0.127695882 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:3e104eb08e5948b08517e4448d4a842b SystemUUID:3e104eb0-8e59-48b0-8517-e4448d4a842b BootID:5d651922-4f48-42db-81f8-e0fd55710ee7 Filesystems:[{Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:672 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1060 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1118 DeviceMajor:0 DeviceMinor:1118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1109 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-620 DeviceMajor:0 DeviceMinor:620 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:911 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5133c097ddac4c4eb3bf47ec178286cfda103ff21a8e794c8ccd120974cf84fe/userdata/shm DeviceMajor:0 DeviceMinor:322 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ded5da9a-1447-46df-a8ff-ffd469562599/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:554 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~projected/kube-api-access-gmqts DeviceMajor:0 DeviceMinor:921 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1146 DeviceMajor:0 DeviceMinor:1146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a73a5b0-478f-496d-8b0c-9e3daf39c082/volumes/kubernetes.io~projected/kube-api-access-qtj5f DeviceMajor:0 DeviceMinor:1194 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-894 DeviceMajor:0 DeviceMinor:894 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e0491730-604c-4a66-b827-458da88d262b/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:842 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-693 DeviceMajor:0 DeviceMinor:693 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~projected/kube-api-access-gtjps DeviceMajor:0 DeviceMinor:836 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0fa00daf2556f9e828b5fbe69aad8b754ab0adc35064e5863d606b4e86280d65/userdata/shm DeviceMajor:0 DeviceMinor:98 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 
DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:511 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/227a0b9baae07c0c4d734e64a0b0160569405208b0b5bc4e93e6fc2a2a7e7eb6/userdata/shm DeviceMajor:0 DeviceMinor:230 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-266 DeviceMajor:0 DeviceMinor:266 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-402 DeviceMajor:0 DeviceMinor:402 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a920827df943f06d02da8e8ea819eda5fb31c3dfefaa7f8b86842839ee17dd17/userdata/shm DeviceMajor:0 DeviceMinor:953 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-940 DeviceMajor:0 DeviceMinor:940 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3cd6d09fe73a460b498f00d76bd556cdb55771a774477420bab191c7dcd68863/userdata/shm DeviceMajor:0 DeviceMinor:1075 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-701 DeviceMajor:0 DeviceMinor:701 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/033f63b4380b8e8b86a2ab76d4dc1a8c7396bde9f2a0aae0a46e053c5f07e8f1/userdata/shm DeviceMajor:0 DeviceMinor:119 
Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/56365780-b87d-43fc-95f5-8a44166aecf8/volumes/kubernetes.io~projected/kube-api-access-5rzx9 DeviceMajor:0 DeviceMinor:552 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/819b5de997e19e19a9d977e809d0fb3fdd9648622a344dd4ddd33e56129c529f/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~projected/kube-api-access-flln7 DeviceMajor:0 DeviceMinor:845 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7bdc639c2478b5c195d66a7791ae65075a49456c359aa49e7fc420db2f85021a/userdata/shm DeviceMajor:0 DeviceMinor:1195 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~projected/kube-api-access-r8bm4 DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c3610f08-aba1-411d-aa6d-811b88acdb7b/volumes/kubernetes.io~projected/kube-api-access-jdgvx DeviceMajor:0 DeviceMinor:822 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/60683578-6673-4aff-b1d5-3167d534ac08/volumes/kubernetes.io~projected/kube-api-access-zcmdk DeviceMajor:0 DeviceMinor:115 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-982 DeviceMajor:0 DeviceMinor:982 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43fca1a4-4fa7-4a43-b9c4-7f50a8737643/volumes/kubernetes.io~projected/kube-api-access-mbktm DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/319cb3ca2c37415dc41e1160ebdc6c8cfc6a2108542dd10b877b244ac8b9e929/userdata/shm DeviceMajor:0 DeviceMinor:490 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-429 DeviceMajor:0 DeviceMinor:429 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d3fd276-2fe2-423a-b1ee-f27f1596d013/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:348 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598/userdata/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~projected/kube-api-access-nfmmt DeviceMajor:0 DeviceMinor:1070 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-697 DeviceMajor:0 DeviceMinor:697 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed4ce2b-080f-4523-8527-eee768e06123/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:807 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~projected/kube-api-access-tll8k DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 DeviceMinor:262 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d153f8589c77234f9dc34525d12bab7d6b406888e2e51c22abf001583537f5c4/userdata/shm DeviceMajor:0 DeviceMinor:460 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-499 DeviceMajor:0 DeviceMinor:499 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cef53432-93f5-4581-b3de-c8cc5cac2ecb/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:818 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6937b999e172420380651c53fc5e6680d5943c027cccaefd6221f5dee41afb2c/userdata/shm DeviceMajor:0 DeviceMinor:1080 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5587303dfbff2e0f6e8f88f34bf2533361126f22ec3322ef362bf2e083f2b5d9/userdata/shm DeviceMajor:0 DeviceMinor:826 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-966 DeviceMajor:0 DeviceMinor:966 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1017 DeviceMajor:0 DeviceMinor:1017 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8ceab37591fbebe145d89befc6bda128dba3935ebb7ed63b53f71a4c6187794d/userdata/shm DeviceMajor:0 DeviceMinor:292 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1078 DeviceMajor:0 DeviceMinor:1078 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba384c9cdc57f87a975d87b2de9f0cfa5598c8a35123c7bc925dcebbf60a5093/userdata/shm DeviceMajor:0 DeviceMinor:740 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-687 DeviceMajor:0 
DeviceMinor:687 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1050 DeviceMajor:0 DeviceMinor:1050 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~projected/kube-api-access-x2hfh DeviceMajor:0 DeviceMinor:274 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-395 DeviceMajor:0 DeviceMinor:395 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1199 DeviceMajor:0 DeviceMinor:1199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79fb6ce279b79d74bcf11031fa64e39867565987e212e753cafb1ffc3c809037/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-565 DeviceMajor:0 DeviceMinor:565 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-914 DeviceMajor:0 DeviceMinor:914 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~projected/kube-api-access-dt99t DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:676 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/46c7cde3-2cb4-4fa8-94ca-d5feff877da9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1207 DeviceMajor:0 DeviceMinor:1207 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-720 DeviceMajor:0 DeviceMinor:720 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eb3a8fcff4f5b0d2ffe195a7a3bcd28a1a9853e3da407d7227450ac49c662071/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~projected/kube-api-access-smvtc DeviceMajor:0 DeviceMinor:103 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:592 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:674 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4a73a5b0-478f-496d-8b0c-9e3daf39c082/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1193 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-892 DeviceMajor:0 DeviceMinor:892 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8cb029a2424e510cd558a569b6abe1c9bc15c4884423b14519ec85b57a58f6a2/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-382 DeviceMajor:0 DeviceMinor:382 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:440 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cef53432-93f5-4581-b3de-c8cc5cac2ecb/volumes/kubernetes.io~projected/kube-api-access-sm9vh DeviceMajor:0 DeviceMinor:823 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/611f0577f694d16ae6cfdfa887a45e57816d4fedaa4b7733f18258fff60747d7/userdata/shm DeviceMajor:0 DeviceMinor:934 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1091 DeviceMajor:0 DeviceMinor:1091 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~projected/kube-api-access-wpcnv DeviceMajor:0 DeviceMinor:140 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/58fbf09a-3a26-45ab-8496-11d05c27e9cf/volumes/kubernetes.io~projected/kube-api-access-4xjhk DeviceMajor:0 DeviceMinor:275 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fed75514-8f48-40b7-9fed-0afd6042cfbf/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:458 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:513 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ded5da9a-1447-46df-a8ff-ffd469562599/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:555 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1065 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-815 DeviceMajor:0 DeviceMinor:815 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/volumes/kubernetes.io~projected/kube-api-access-s9tpx DeviceMajor:0 DeviceMinor:837 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/172085267d003a11af66385fae45641af5f2ea573dfe38357436fa95e4bfc2cb/userdata/shm DeviceMajor:0 DeviceMinor:1032 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:812 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/703bc73d8896572892810aca25e1497f5e98093e90b265dabb39322f65959059/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:673 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-843 DeviceMajor:0 DeviceMinor:843 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-520 DeviceMajor:0 DeviceMinor:520 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-683 DeviceMajor:0 DeviceMinor:683 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e8a7e077-3f6c-4efb-9865-cf82480c5da1/volumes/kubernetes.io~projected/kube-api-access-mncvz DeviceMajor:0 DeviceMinor:789 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b42aee2f-bffc-4c43-bf20-16d9c67d216c/volumes/kubernetes.io~projected/kube-api-access-lbvbr DeviceMajor:0 DeviceMinor:841 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 
DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/093a8e850a736d3eca3797467a6bc2ecea1fef6e909d2da61102bdda8dc94887/userdata/shm DeviceMajor:0 DeviceMinor:827 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-471 DeviceMajor:0 DeviceMinor:471 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/09cc190d-5647-40a1-bfe9-5355bcb33b10/volumes/kubernetes.io~projected/kube-api-access-4w5fk DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-535 DeviceMajor:0 DeviceMinor:535 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec43fc3d3a5ac191c7efb625569a2dc8960d02c6765df5d0352ccc2d0da0a0a4/userdata/shm DeviceMajor:0 DeviceMinor:333 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95f6a209ef68dab4cb5672857aeba51bebac9f6d112d21c7fcd718cb5be803c7/userdata/shm DeviceMajor:0 DeviceMinor:847 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1073 DeviceMajor:0 DeviceMinor:1073 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/53bff8e4-bf60-4386-8905-49d43fd6c420/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f9982a7fe2276ecf5bf8dd3bab737e593501425df536f9820a4bd04690b29d97/userdata/shm DeviceMajor:0 DeviceMinor:959 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-638 DeviceMajor:0 DeviceMinor:638 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-870 DeviceMajor:0 DeviceMinor:870 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:494 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-618 DeviceMajor:0 DeviceMinor:618 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fc42f33929f2a6b9103f7b23ae3ef7d3e614662550ded98a184c1328a4069b14/userdata/shm DeviceMajor:0 DeviceMinor:794 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3014cb772787d6c5ed5213751efdfc2f600b71700a9642b8657868066aed7a56/userdata/shm DeviceMajor:0 DeviceMinor:796 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0adaea87-67d0-41a7-a1f3-855fdd483aca/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:835 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-441 DeviceMajor:0 DeviceMinor:441 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ca2f7cb3-8812-4fe3-83a5-61668ef87f99/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:571 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-961 DeviceMajor:0 DeviceMinor:961 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1107 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d66c30b6-67ad-4864-8b51-0424d462ac98/volumes/kubernetes.io~projected/kube-api-access-hccqk DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-685 DeviceMajor:0 DeviceMinor:685 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~projected/kube-api-access-47plx DeviceMajor:0 DeviceMinor:278 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/672ad0aa-a0c5-4640-840d-3ffa02c55d62/volumes/kubernetes.io~projected/kube-api-access-t58zw DeviceMajor:0 DeviceMinor:295 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-497 DeviceMajor:0 DeviceMinor:497 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~projected/kube-api-access-npg9k DeviceMajor:0 DeviceMinor:583 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d06b72c6f0371c1b0257ad61f4ae8d069961f5af58fd20925966cfc79d79903d/userdata/shm DeviceMajor:0 DeviceMinor:908 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1186 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-942 DeviceMajor:0 DeviceMinor:942 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3bfd630f9fdf5b8b85f98f54adb8a0d11b734768f05534837d7eafe24eba9814/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a591384f-f83e-4f65-b5d0-d519f05edbd9/volumes/kubernetes.io~projected/kube-api-access-vbmx9 DeviceMajor:0 DeviceMinor:560 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:833 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c2c1fb4aec553af65176f49e937958c69c931605beee69d28364ee9ba795514f/userdata/shm DeviceMajor:0 DeviceMinor:856 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4e41041845987412c5331ff6cc2618d3c5ae42cf3d9f83fd7b71a693c8e76498/userdata/shm DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-510 DeviceMajor:0 DeviceMinor:510 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1185 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:448 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:102 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b99a9e40477692f9f0735d27cce4c13db8b181a07746d8c9e160e5b7831c820/userdata/shm DeviceMajor:0 DeviceMinor:236 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:489 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-663 DeviceMajor:0 DeviceMinor:663 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79e902522cf9e089c0a0493aeac487bed34c920c85cbed922e6fdff4d7dc7fa4/userdata/shm DeviceMajor:0 DeviceMinor:523 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~projected/kube-api-access-bnxk9 DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-396 DeviceMajor:0 DeviceMinor:396 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/14438c84-72d3-4f45-88a4-fc7e80df5fb8/volumes/kubernetes.io~projected/kube-api-access-dfdkb DeviceMajor:0 DeviceMinor:811 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/58fb20f0efe35396beaa43bc3d7cc4b5db2f0e64b1edfa9263cafc7641e2c772/userdata/shm DeviceMajor:0 DeviceMinor:1189 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1077 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90b6bf31b6285b89ba457dc317b7de2db8799afd4d2c378edeab172c14801f77/userdata/shm DeviceMajor:0 DeviceMinor:351 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-405 DeviceMajor:0 DeviceMinor:405 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45523224-f530-4354-90de-7fd65a1a3911/volumes/kubernetes.io~projected/kube-api-access-8l8cg DeviceMajor:0 DeviceMinor:267 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cd1425b9-fcd1-4aba-899f-e110eebce626/volumes/kubernetes.io~projected/kube-api-access-s2vbp DeviceMajor:0 DeviceMinor:821 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/408fc587f3c1d995e472d57ef08e1448783433be2d773a5e80c2f22fddf79bea/userdata/shm DeviceMajor:0 DeviceMinor:561 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1021 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-616 DeviceMajor:0 DeviceMinor:616 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:452 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/71d6ffeaf51e521880d0d21a5fc9c90428957a1500a387a07fba1ffc0e879334/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/volumes/kubernetes.io~projected/kube-api-access-7g2ng DeviceMajor:0 DeviceMinor:910 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-665 DeviceMajor:0 DeviceMinor:665 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d3fd276-2fe2-423a-b1ee-f27f1596d013/volumes/kubernetes.io~projected/kube-api-access-cqc86 DeviceMajor:0 DeviceMinor:349 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~projected/kube-api-access-cjnjq DeviceMajor:0 DeviceMinor:261 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 
DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-464 DeviceMajor:0 DeviceMinor:464 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e/volumes/kubernetes.io~projected/kube-api-access-rp5rd DeviceMajor:0 DeviceMinor:803 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3883b232-5772-460f-9e94-b4cbc7b7e638/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1067 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257/userdata/shm DeviceMajor:0 DeviceMinor:66 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-251 DeviceMajor:0 DeviceMinor:251 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d6cd2eac-6412-4f38-8272-743c67b218a3/volumes/kubernetes.io~projected/kube-api-access-x4n26 DeviceMajor:0 DeviceMinor:257 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1116 DeviceMajor:0 DeviceMinor:1116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~projected/kube-api-access-qvnp7 DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:839 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ea94bf8965f915667b084d40efeb4f5102c63b750c132e105898d2d86dfc6bcf/userdata/shm DeviceMajor:0 DeviceMinor:852 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~projected/kube-api-access-m8b7s DeviceMajor:0 DeviceMinor:298 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-377 DeviceMajor:0 DeviceMinor:377 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b36bbd0455724f4c84a788594d831cdec4b648d0e41f4b0f6e9ae8e3b529de5/userdata/shm DeviceMajor:0 DeviceMinor:341 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-760 DeviceMajor:0 DeviceMinor:760 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-964 DeviceMajor:0 DeviceMinor:964 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-537 DeviceMajor:0 DeviceMinor:537 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-427 DeviceMajor:0 DeviceMinor:427 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-876 DeviceMajor:0 DeviceMinor:876 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~projected/kube-api-access-ssdjz DeviceMajor:0 DeviceMinor:1069 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/757166b43c0c56e8283c67b367d970d37bc2cba347814ca1a8d85ab635b22caa/userdata/shm DeviceMajor:0 DeviceMinor:955 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/42427cdb4004876179dcfbd8f19dca1e35b1708032ece70b1b2417c09bcc6b09/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1119 DeviceMajor:0 DeviceMinor:1119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~projected/kube-api-access-zbw6q DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/204782aa21e2bf31865a1381946590d0ce8a970fb26f83eebd02fa7b0497c2c5/userdata/shm DeviceMajor:0 DeviceMinor:595 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aeeb874811e84346db41fb4fb7b6cad106590322b692edfbf0b6c383addea6a6/userdata/shm DeviceMajor:0 DeviceMinor:332 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-868 DeviceMajor:0 DeviceMinor:868 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1187ddcd-3b78-4b3f-9b12-06ce76cb6040/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~projected/kube-api-access-svz6j DeviceMajor:0 DeviceMinor:488 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f7c28b40cde4a7aad725d4c7e6669cdabc0febc1e8bf8d8daea1b94e0e12e828/userdata/shm DeviceMajor:0 DeviceMinor:809 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8ee3b1585121acd28cac002efd25a4951438f7aba1490780501fdecb04a7dd12/userdata/shm DeviceMajor:0 DeviceMinor:1071 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-567 DeviceMajor:0 DeviceMinor:567 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba9f914f103017d6ef2cf2c16d508f5302ad218dbd57c88fe26f6d74473e9036/userdata/shm DeviceMajor:0 DeviceMinor:838 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-920 DeviceMajor:0 DeviceMinor:920 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~projected/kube-api-access-t6t27 DeviceMajor:0 DeviceMinor:594 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da2e551f19738e875d8b4b505223588d9ea94eb7716af7e0ff449212c8514bb4/userdata/shm DeviceMajor:0 
DeviceMinor:792 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-974 DeviceMajor:0 DeviceMinor:974 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~projected/kube-api-access-p4jnj DeviceMajor:0 DeviceMinor:1110 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1205 DeviceMajor:0 DeviceMinor:1205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-365 DeviceMajor:0 DeviceMinor:365 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~projected/kube-api-access-tr4bl DeviceMajor:0 DeviceMinor:313 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8928fc78a20804bb52860e947962b354cf91d1529b5deb719ab35788e3ef8791/userdata/shm DeviceMajor:0 DeviceMinor:344 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d504cbc7-5c09-4712-9f7a-c41a6386ef79/volumes/kubernetes.io~projected/kube-api-access-tmwbr DeviceMajor:0 DeviceMinor:784 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1012 DeviceMajor:0 DeviceMinor:1012 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-857 DeviceMajor:0 DeviceMinor:857 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-695 DeviceMajor:0 DeviceMinor:695 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:832 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-778 DeviceMajor:0 DeviceMinor:778 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/volumes/kubernetes.io~projected/kube-api-access-2svkc DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/volumes/kubernetes.io~projected/kube-api-access-k5hmg DeviceMajor:0 DeviceMinor:360 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3a07456d-2e8e-4e80-a777-d0903ad21f07/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:455 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-722 DeviceMajor:0 DeviceMinor:722 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~projected/kube-api-access-lgrjz DeviceMajor:0 DeviceMinor:1188 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/3eeb72c3-1a56-4955-845e-81607513b1b2/volumes/kubernetes.io~projected/kube-api-access-jns5r DeviceMajor:0 DeviceMinor:350 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1128 DeviceMajor:0 DeviceMinor:1128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcc18f9-66cf-45d9-965d-d0a57fcf285c/volumes/kubernetes.io~projected/kube-api-access-4tfnn DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-805 DeviceMajor:0 DeviceMinor:805 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:593 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-417 DeviceMajor:0 DeviceMinor:417 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3a0665e823da7bfc0df78c1979cfd4c3ca72731bad4e79e2c131fc1c4139e66f/userdata/shm DeviceMajor:0 DeviceMinor:829 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:867 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-972 DeviceMajor:0 DeviceMinor:972 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-575 DeviceMajor:0 DeviceMinor:575 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3c61e204454e38428fa04296fdaa0b86068d8df14b3972facff7186f87934a5b/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31e46a34-8a00-4bb3-869b-8a5911ef6cf8/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1066 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fe1881fb-c670-442a-a092-c1eee6b7d5e5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/676f4062-ea34-48d0-80d7-3cd3d9da341e/volumes/kubernetes.io~projected/kube-api-access-h925l DeviceMajor:0 DeviceMinor:265 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16ae0be12cb0948b576a88de76c552bf6bb4908608f91f6bc384118d39093798/userdata/shm DeviceMajor:0 DeviceMinor:798 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c3610f08-aba1-411d-aa6d-811b88acdb7b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:820 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~projected/kube-api-access-2w48g DeviceMajor:0 DeviceMinor:1031 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1669b77c-4bef-42d5-ad0b-63c12a6677b2/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:487 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b6eced12019f1a054184dc214ff7951a270b910027060a2b561a895337a163e/userdata/shm DeviceMajor:0 DeviceMinor:339 Capacity:67108864 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-813 DeviceMajor:0 DeviceMinor:813 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4/userdata/shm DeviceMajor:0 DeviceMinor:1111 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/55440bf9-0881-4823-af64-5652c2ad89ff/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:834 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1052 DeviceMajor:0 DeviceMinor:1052 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-762 DeviceMajor:0 DeviceMinor:762 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/67e5534b-f428-45cf-b54e-d06b25dc3e09/volumes/kubernetes.io~projected/kube-api-access-s45nc DeviceMajor:0 DeviceMinor:887 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/05c047f1dd1f77466b4da70d7d89474989156a4dc7f05fb84cbb6a93b60f00f0/userdata/shm DeviceMajor:0 DeviceMinor:851 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-944 DeviceMajor:0 DeviceMinor:944 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-597 DeviceMajor:0 DeviceMinor:597 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5/volumes/kubernetes.io~projected/kube-api-access-g8p7b DeviceMajor:0 DeviceMinor:791 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~projected/kube-api-access-mxz2j DeviceMajor:0 DeviceMinor:1068 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:802 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b7b8f971d1b9c8b6f3d7b9515f5fd45062c7ae583953b16c0868b2e9161722d/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/kube-api-access-rbzvl DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-361 DeviceMajor:0 DeviceMinor:361 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1095 DeviceMajor:0 DeviceMinor:1095 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5d9fbaf-ba14-4d2b-8376-1634eabbc782/volumes/kubernetes.io~projected/kube-api-access-rrmjf DeviceMajor:0 DeviceMinor:453 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bff5aeea-f859-4e38-bf1c-9e730025c212/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:443 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/211d123b-829c-49dd-b119-e172cab607cf/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:468 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-885 DeviceMajor:0 DeviceMinor:885 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/volumes/kubernetes.io~projected/kube-api-access-7thvr DeviceMajor:0 DeviceMinor:264 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1180 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4/volumes/kubernetes.io~projected/kube-api-access-jrdvd DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1019 DeviceMajor:0 DeviceMinor:1019 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1b230b9d-529c-4b28-bc73-659a28d7961a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1064 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a67ae8dc-240d-4708-9139-1d49c601e552/volumes/kubernetes.io~projected/kube-api-access-c654s DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1093 DeviceMajor:0 DeviceMinor:1093 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-526 DeviceMajor:0 DeviceMinor:526 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1504c38858cfd6dba74a1e8e13c6787eab9fb680b233330961a4b98abfa59449/userdata/shm DeviceMajor:0 DeviceMinor:306 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ba0c50971e9f4b73d6981687bf5599b2b14e3a056e01cd696dec3ae2bc23ec5/userdata/shm DeviceMajor:0 DeviceMinor:528 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-563 DeviceMajor:0 DeviceMinor:563 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ed4ce2b-080f-4523-8527-eee768e06123/volumes/kubernetes.io~projected/kube-api-access-nql4h DeviceMajor:0 DeviceMinor:810 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7432a082c2253d23b865426cbd0b7c6fc641fd734bb3b6088975045dd1832638/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eab66404c12034ae89f04e45ade44912e55d6fddf5edcf6fc585e549c9b0d555/userdata/shm DeviceMajor:0 DeviceMinor:240 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/dde1a2d9-a43e-4b26-82d7-e0f83577468f/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:582 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f0094116ac72664d552811b0abcde688bb0d625fbe1bc8a48307ec88ea248337/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/561b7381-8439-4ccc-ac50-d7a50aeb0c55/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:588 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6cc45721-c05b-4161-91d9-d65cf6ec61d4/volumes/kubernetes.io~projected/kube-api-access-k6t9w DeviceMajor:0 DeviceMinor:321 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/56365780-b87d-43fc-95f5-8a44166aecf8/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:599 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c222998f-6211-4466-8ad7-5d9fcfb10789/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:671 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5aefb6138adeb7d46c141d72648e74fb238235b8d8af02bde5beca7c384d92e7/userdata/shm DeviceMajor:0 DeviceMinor:357 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-994 DeviceMajor:0 DeviceMinor:994 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/280c1ab0d20d5f0a1fc3fe957fae99e999c792256be0729f4bd66bf08519c5bf/userdata/shm DeviceMajor:0 DeviceMinor:84 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/72756f50-c970-4ef6-b8ca-88e49f996a74/volumes/kubernetes.io~projected/kube-api-access-zxn9l DeviceMajor:0 DeviceMinor:790 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cd1425b9-fcd1-4aba-899f-e110eebce626/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:817 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-950 DeviceMajor:0 DeviceMinor:950 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/012cdc1d-ebc8-431e-9a52-9a39de95dd0d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-976 DeviceMajor:0 DeviceMinor:976 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e924b0646dc2650e31e1b4cadf6eac6293c32b11a283f47d90fa34c50c73d4f0/userdata/shm DeviceMajor:0 DeviceMinor:243 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/17e0cb4a-e776-4886-927e-ae446af7f234/volumes/kubernetes.io~projected/kube-api-access-85vjd DeviceMajor:0 DeviceMinor:283 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d58c6b38-ef11-465c-9fee-b83b84ce4669/volumes/kubernetes.io~projected/kube-api-access-bs6m8 DeviceMajor:0 DeviceMinor:450 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a57648b5-1a08-49a7-bedb-f7c1e54d92b4/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:514 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2305598daf56c5c1600160f739031c4731c0af7f38255994d1bd85834e8628b0/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-586 DeviceMajor:0 DeviceMinor:586 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cee650f463641d78c2e399a131e5c5cb6dd2c4bd205c9ebc6a4a1814777051c4/userdata/shm DeviceMajor:0 DeviceMinor:600 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1187 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfdc2c2398d469d4bfd88f77bd233e682dfa44d723fa2659a746468a66c31467/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1197 DeviceMajor:0 DeviceMinor:1197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-398 DeviceMajor:0 DeviceMinor:398 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-719 DeviceMajor:0 DeviceMinor:719 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 
DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-469 DeviceMajor:0 DeviceMinor:469 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-643 DeviceMajor:0 DeviceMinor:643 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57227a66-c758-4a46-a5e1-f603baa3f570/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:808 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3f81774a-22a4-4335-961b-04e53e0f3b5e/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1030 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/70e8c62b-97c3-4c0c-85d3-f660118831fd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-939 DeviceMajor:0 DeviceMinor:939 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fed75514-8f48-40b7-9fed-0afd6042cfbf/volumes/kubernetes.io~projected/kube-api-access-h9t7v DeviceMajor:0 DeviceMinor:459 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14438c84-72d3-4f45-88a4-fc7e80df5fb8/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:804 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/67e5534b-f428-45cf-b54e-d06b25dc3e09/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:884 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:260 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/70258988-8374-4aee-aaa2-be3c2e853062/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:316 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6814e0600083f0996ce4c3d6eefe5646615f1a2b02ab21e27a25e1eb855f75c6/userdata/shm DeviceMajor:0 DeviceMinor:317 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-404 DeviceMajor:0 DeviceMinor:404 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e25a16f3-dfe0-49c5-a31d-e310d369f406/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:449 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-957 DeviceMajor:0 DeviceMinor:957 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/525b41b5-82d8-4d47-8350-79644a2c9360/volumes/kubernetes.io~projected/kube-api-access-8s7rj DeviceMajor:0 DeviceMinor:271 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5d9fbaf-ba14-4d2b-8376-1634eabbc782/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:451 Capacity:32475529216 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/var/lib/kubelet/pods/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:675 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-705 DeviceMajor:0 DeviceMinor:705 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33ca2f2b19a1770d26eec6f100c1e6f12e2c50ac6dbb0f1fd1d1831103d4af22/userdata/shm DeviceMajor:0 DeviceMinor:380 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ce62aa530e9de7b740f93aac76703fc3a80b1ed5e0bbed25b7228c7b762d272f/userdata/shm DeviceMajor:0 DeviceMinor:462 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d1ec5df20bed29547ffb1f52c2c4287cab5554fd187df0c227bb31c435fc62a0/userdata/shm DeviceMajor:0 DeviceMinor:495 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39c6768818fedea75d87ad8b7a8640832bffe77cbf3d443982b6c9295adc4865/userdata/shm DeviceMajor:0 DeviceMinor:641 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1106 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1191 DeviceMajor:0 DeviceMinor:1191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e413507c6f4a8e010e922bcd426014dd970b85408295730281ace1a504f9959/userdata/shm DeviceMajor:0 DeviceMinor:346 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-613 DeviceMajor:0 DeviceMinor:613 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-61 DeviceMajor:0 DeviceMinor:61 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-717 DeviceMajor:0 DeviceMinor:717 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-153 DeviceMajor:0 DeviceMinor:153 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/47da8964-3606-4181-87fb-8f04a3065295/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:141 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:093a8e850a736d3 MacAddress:0a:a6:bf:19:97:b5 Speed:10000 Mtu:8900} {Name:0e413507c6f4a8e MacAddress:6a:4a:71:c1:7c:57 Speed:10000 Mtu:8900} {Name:1504c38858cfd6d MacAddress:d6:49:8c:48:c6:19 Speed:10000 Mtu:8900} {Name:16ae0be12cb0948 MacAddress:16:7b:58:64:f3:13 Speed:10000 Mtu:8900} {Name:172085267d003a1 MacAddress:9a:3d:57:d4:84:87 Speed:10000 Mtu:8900} {Name:204782aa21e2bf3 MacAddress:f6:d3:70:c0:07:1b Speed:10000 Mtu:8900} {Name:227a0b9baae07c0 MacAddress:b2:69:c4:57:55:0a Speed:10000 Mtu:8900} {Name:2305598daf56c5c MacAddress:42:46:fa:f8:ac:79 Speed:10000 Mtu:8900} {Name:280c1ab0d20d5f0 MacAddress:36:bf:cf:3b:9a:02 Speed:10000 Mtu:8900} {Name:2b1a761121f2940 MacAddress:26:bb:0b:cb:b6:e2 Speed:10000 Mtu:8900} 
{Name:2b6eced12019f1a MacAddress:8a:35:f8:40:52:0d Speed:10000 Mtu:8900} {Name:2b99a9e40477692 MacAddress:b2:d3:b6:1f:cb:f8 Speed:10000 Mtu:8900} {Name:2ba0c50971e9f4b MacAddress:1e:d1:f9:9d:5a:7c Speed:10000 Mtu:8900} {Name:3014cb772787d6c MacAddress:2e:d5:cd:d9:1b:d2 Speed:10000 Mtu:8900} {Name:319cb3ca2c37415 MacAddress:f6:00:39:e6:67:75 Speed:10000 Mtu:8900} {Name:33ca2f2b19a1770 MacAddress:ca:27:52:9e:46:61 Speed:10000 Mtu:8900} {Name:3a0665e823da7bf MacAddress:5a:16:1f:28:77:c5 Speed:10000 Mtu:8900} {Name:3c61e204454e384 MacAddress:be:84:1f:9e:89:5a Speed:10000 Mtu:8900} {Name:3cd6d09fe73a460 MacAddress:62:76:21:84:e7:fd Speed:10000 Mtu:8900} {Name:42427cdb4004876 MacAddress:ea:25:3e:e8:1f:7f Speed:10000 Mtu:8900} {Name:4e4104184598741 MacAddress:26:4c:e8:58:56:2a Speed:10000 Mtu:8900} {Name:5133c097ddac4c4 MacAddress:86:8d:ff:f5:59:88 Speed:10000 Mtu:8900} {Name:5587303dfbff2e0 MacAddress:72:fa:4d:b9:f4:e7 Speed:10000 Mtu:8900} {Name:58fb20f0efe3539 MacAddress:5e:ab:47:f1:5b:64 Speed:10000 Mtu:8900} {Name:5aefb6138adeb7d MacAddress:fa:3f:84:aa:ef:f8 Speed:10000 Mtu:8900} {Name:611f0577f694d16 MacAddress:fe:da:ad:c7:b4:42 Speed:10000 Mtu:8900} {Name:6814e0600083f09 MacAddress:f2:44:a7:d8:85:ee Speed:10000 Mtu:8900} {Name:6937b999e172420 MacAddress:62:b8:95:01:05:ff Speed:10000 Mtu:8900} {Name:6b36bbd0455724f MacAddress:e6:6e:d0:a3:7d:e5 Speed:10000 Mtu:8900} {Name:703bc73d8896572 MacAddress:a2:1b:ab:f3:26:23 Speed:10000 Mtu:8900} {Name:79e902522cf9e08 MacAddress:d6:f3:d8:4b:cc:a2 Speed:10000 Mtu:8900} {Name:7bdc639c2478b5c MacAddress:76:3d:18:59:c3:fc Speed:10000 Mtu:8900} {Name:819b5de997e19e1 MacAddress:92:40:76:53:bc:4a Speed:10000 Mtu:8900} {Name:8928fc78a20804b MacAddress:4a:dd:fd:6f:79:ba Speed:10000 Mtu:8900} {Name:8cb029a2424e510 MacAddress:8a:ed:f8:a8:75:3f Speed:10000 Mtu:8900} {Name:8ceab37591fbebe MacAddress:52:d2:1b:85:a8:4e Speed:10000 Mtu:8900} {Name:90b6bf31b6285b8 MacAddress:b2:88:00:64:13:36 Speed:10000 Mtu:8900} {Name:95f6a209ef68dab 
MacAddress:d2:2f:60:66:14:c4 Speed:10000 Mtu:8900} {Name:a920827df943f06 MacAddress:4a:25:68:8e:54:b2 Speed:10000 Mtu:8900} {Name:aeeb874811e8434 MacAddress:d6:46:42:1f:f1:cd Speed:10000 Mtu:8900} {Name:ba384c9cdc57f87 MacAddress:e6:97:7a:7b:52:5d Speed:10000 Mtu:8900} {Name:ba9f914f103017d MacAddress:92:a5:ce:67:3f:c0 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:7a:f2:ce:15:99:40 Speed:0 Mtu:8900} {Name:c2c1fb4aec553af MacAddress:ae:71:80:c2:b5:e4 Speed:10000 Mtu:8900} {Name:ce62aa530e9de7b MacAddress:36:85:61:ae:c0:45 Speed:10000 Mtu:8900} {Name:cee650f463641d7 MacAddress:1a:98:35:93:90:02 Speed:10000 Mtu:8900} {Name:d06b72c6f0371c1 MacAddress:66:8e:aa:91:0d:76 Speed:10000 Mtu:8900} {Name:d153f8589c77234 MacAddress:aa:40:ca:02:76:3c Speed:10000 Mtu:8900} {Name:d1ec5df20bed295 MacAddress:5e:57:e6:82:43:1e Speed:10000 Mtu:8900} {Name:da2e551f19738e8 MacAddress:ce:dd:6e:c9:db:bb Speed:10000 Mtu:8900} {Name:e924b0646dc2650 MacAddress:c2:aa:70:67:ee:e9 Speed:10000 Mtu:8900} {Name:eab66404c12034a MacAddress:26:02:3d:99:49:74 Speed:10000 Mtu:8900} {Name:eb3a8fcff4f5b0d MacAddress:42:6d:98:30:87:11 Speed:10000 Mtu:8900} {Name:ec43fc3d3a5ac19 MacAddress:a6:b5:87:af:20:21 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:33:06:4c Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:34:d3:c4 Speed:-1 Mtu:9000} {Name:f0094116ac72664 MacAddress:0a:c5:28:af:d8:de Speed:10000 Mtu:8900} {Name:fc42f33929f2a6b MacAddress:7e:8f:0c:1d:1d:ea Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:7e:e2:1c:fe:3d:73 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 19 09:33:35.208964 master-0 kubenswrapper[27819]: I0319 09:33:35.208313 27819 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 19 09:33:35.208964 master-0 kubenswrapper[27819]: I0319 09:33:35.208390 27819 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 19 09:33:35.208964 master-0 kubenswrapper[27819]: I0319 09:33:35.208670 27819 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 19 09:33:35.209312 master-0 kubenswrapper[27819]: I0319 09:33:35.209211 27819 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 09:33:35.209494 master-0 kubenswrapper[27819]: I0319 09:33:35.209250 27819 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 09:33:35.209494 master-0 kubenswrapper[27819]: I0319 09:33:35.209492 27819 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 09:33:35.209623 master-0 kubenswrapper[27819]: I0319 09:33:35.209506 27819 container_manager_linux.go:303] "Creating device plugin manager" Mar 19 09:33:35.209623 master-0 kubenswrapper[27819]: I0319 09:33:35.209516 27819 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 19 09:33:35.209623 master-0 kubenswrapper[27819]: I0319 09:33:35.209557 27819 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 19 09:33:35.209808 master-0 kubenswrapper[27819]: I0319 09:33:35.209780 27819 state_mem.go:36] "Initialized new in-memory state store" Mar 19 09:33:35.209911 master-0 kubenswrapper[27819]: I0319 09:33:35.209888 27819 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 19 09:33:35.209975 master-0 kubenswrapper[27819]: I0319 09:33:35.209960 27819 kubelet.go:418] "Attempting to sync node with API server" Mar 19 09:33:35.210024 master-0 kubenswrapper[27819]: I0319 09:33:35.209977 27819 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 09:33:35.210024 master-0 kubenswrapper[27819]: I0319 09:33:35.209999 27819 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 19 09:33:35.210024 master-0 kubenswrapper[27819]: I0319 09:33:35.210014 27819 kubelet.go:324] "Adding apiserver pod source" Mar 19 09:33:35.210135 master-0 kubenswrapper[27819]: I0319 09:33:35.210034 27819 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 09:33:35.211202 master-0 kubenswrapper[27819]: I0319 09:33:35.211169 27819 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 19 09:33:35.211327 master-0 kubenswrapper[27819]: I0319 09:33:35.211306 27819 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211536 27819 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211762 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211781 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211788 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211794 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211800 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211806 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211816 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211823 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211831 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211839 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211859 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211870 27819 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.211921 27819 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.212232 27819 server.go:1280] "Started kubelet" Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.212372 27819 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.212373 27819 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 09:33:35.212694 master-0 kubenswrapper[27819]: I0319 09:33:35.212449 27819 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 19 09:33:35.213468 master-0 kubenswrapper[27819]: I0319 09:33:35.212955 27819 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 09:33:35.213982 master-0 kubenswrapper[27819]: I0319 09:33:35.213931 27819 server.go:449] "Adding debug handlers to kubelet server" Mar 19 09:33:35.218105 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 19 09:33:35.240919 master-0 kubenswrapper[27819]: I0319 09:33:35.238251 27819 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 09:33:35.248239 master-0 kubenswrapper[27819]: I0319 09:33:35.248194 27819 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 19 09:33:35.248239 master-0 kubenswrapper[27819]: I0319 09:33:35.248236 27819 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 09:33:35.248459 master-0 kubenswrapper[27819]: I0319 09:33:35.248281 27819 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-20 09:09:07 +0000 UTC, rotation deadline is 2026-03-20 02:59:23.966690958 +0000 UTC Mar 19 09:33:35.248459 master-0 kubenswrapper[27819]: I0319 09:33:35.248322 27819 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h25m48.718371051s for next certificate rotation Mar 19 09:33:35.248577 master-0 kubenswrapper[27819]: I0319 09:33:35.248472 27819 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 19 09:33:35.248577 master-0 kubenswrapper[27819]: I0319 09:33:35.248492 27819 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 19 09:33:35.248703 master-0 kubenswrapper[27819]: I0319 09:33:35.248672 27819 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 19 09:33:35.249671 master-0 kubenswrapper[27819]: I0319 09:33:35.249011 27819 factory.go:55] Registering systemd factory Mar 19 09:33:35.249671 master-0 kubenswrapper[27819]: I0319 09:33:35.249086 27819 factory.go:221] Registration of the systemd container factory successfully Mar 19 09:33:35.249671 master-0 kubenswrapper[27819]: I0319 09:33:35.249563 27819 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 09:33:35.250021 master-0 kubenswrapper[27819]: I0319 09:33:35.249960 27819 factory.go:153] Registering CRI-O factory Mar 19 
09:33:35.250021 master-0 kubenswrapper[27819]: I0319 09:33:35.249975 27819 factory.go:221] Registration of the crio container factory successfully Mar 19 09:33:35.250021 master-0 kubenswrapper[27819]: E0319 09:33:35.249977 27819 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 19 09:33:35.250151 master-0 kubenswrapper[27819]: I0319 09:33:35.250057 27819 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 19 09:33:35.250151 master-0 kubenswrapper[27819]: I0319 09:33:35.250080 27819 factory.go:103] Registering Raw factory Mar 19 09:33:35.250151 master-0 kubenswrapper[27819]: I0319 09:33:35.250098 27819 manager.go:1196] Started watching for new ooms in manager Mar 19 09:33:35.250289 master-0 kubenswrapper[27819]: I0319 09:33:35.250272 27819 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 19 09:33:35.250641 master-0 kubenswrapper[27819]: I0319 09:33:35.250533 27819 manager.go:319] Starting recovery of all containers Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265138 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-trusted-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265184 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45523224-f530-4354-90de-7fd65a1a3911" volumeName="kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg" seLinuxMountContext="" Mar 19 
09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265195 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53bff8e4-bf60-4386-8905-49d43fd6c420" volumeName="kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265205 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" volumeName="kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265214 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70258988-8374-4aee-aaa2-be3c2e853062" volumeName="kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265223 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3610f08-aba1-411d-aa6d-811b88acdb7b" volumeName="kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265231 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/projected/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-kube-api-access-lgrjz" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265241 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e46a34-8a00-4bb3-869b-8a5911ef6cf8" 
volumeName="kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265252 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265262 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57227a66-c758-4a46-a5e1-f603baa3f570" volumeName="kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265272 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3883b232-5772-460f-9e94-b4cbc7b7e638" volumeName="kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265282 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f81774a-22a4-4335-961b-04e53e0f3b5e" volumeName="kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265290 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265301 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265309 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" volumeName="kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265319 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" volumeName="kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265328 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d58c6b38-ef11-465c-9fee-b83b84ce4669" volumeName="kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265338 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265346 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8a7e077-3f6c-4efb-9865-cf82480c5da1" volumeName="kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-catalog-content" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265355 27819 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" volumeName="kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265363 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6cd2eac-6412-4f38-8272-743c67b218a3" volumeName="kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265372 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6cd2eac-6412-4f38-8272-743c67b218a3" volumeName="kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265382 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="211d123b-829c-49dd-b119-e172cab607cf" volumeName="kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265390 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3883b232-5772-460f-9e94-b4cbc7b7e638" volumeName="kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265400 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53bff8e4-bf60-4386-8905-49d43fd6c420" volumeName="kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265408 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="9d3fd276-2fe2-423a-b1ee-f27f1596d013" volumeName="kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265418 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd1425b9-fcd1-4aba-899f-e110eebce626" volumeName="kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265429 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-serving-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265440 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265449 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265458 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed4ce2b-080f-4523-8527-eee768e06123" volumeName="kubernetes.io/projected/6ed4ce2b-080f-4523-8527-eee768e06123-kube-api-access-nql4h" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265466 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d66c30b6-67ad-4864-8b51-0424d462ac98" volumeName="kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265476 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b230b9d-529c-4b28-bc73-659a28d7961a" volumeName="kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265484 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6cd2eac-6412-4f38-8272-743c67b218a3" volumeName="kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265491 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265500 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ded5da9a-1447-46df-a8ff-ffd469562599" volumeName="kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265509 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fed75514-8f48-40b7-9fed-0afd6042cfbf" volumeName="kubernetes.io/configmap/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-cabundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265517 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="672ad0aa-a0c5-4640-840d-3ffa02c55d62" volumeName="kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265525 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265534 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3883b232-5772-460f-9e94-b4cbc7b7e638" volumeName="kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265560 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265572 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" volumeName="kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265581 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58fbf09a-3a26-45ab-8496-11d05c27e9cf" volumeName="kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265590 27819 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" volumeName="kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265599 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd1425b9-fcd1-4aba-899f-e110eebce626" volumeName="kubernetes.io/projected/cd1425b9-fcd1-4aba-899f-e110eebce626-kube-api-access-s2vbp" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265609 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265618 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/projected/1669b77c-4bef-42d5-ad0b-63c12a6677b2-kube-api-access-svz6j" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265626 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b230b9d-529c-4b28-bc73-659a28d7961a" volumeName="kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265635 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="561b7381-8439-4ccc-ac50-d7a50aeb0c55" volumeName="kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265643 27819 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" volumeName="kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265652 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265663 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55440bf9-0881-4823-af64-5652c2ad89ff" volumeName="kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-apiservice-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265675 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17e0cb4a-e776-4886-927e-ae446af7f234" volumeName="kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265684 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d6cd2eac-6412-4f38-8272-743c67b218a3" volumeName="kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265695 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fed75514-8f48-40b7-9fed-0afd6042cfbf" volumeName="kubernetes.io/projected/fed75514-8f48-40b7-9fed-0afd6042cfbf-kube-api-access-h9t7v" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265707 27819 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a73a5b0-478f-496d-8b0c-9e3daf39c082" volumeName="kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265720 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d504cbc7-5c09-4712-9f7a-c41a6386ef79" volumeName="kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-catalog-content" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265732 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676f4062-ea34-48d0-80d7-3cd3d9da341e" volumeName="kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265744 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265881 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c222998f-6211-4466-8ad7-5d9fcfb10789" volumeName="kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265896 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd1425b9-fcd1-4aba-899f-e110eebce626" volumeName="kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265907 27819 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56365780-b87d-43fc-95f5-8a44166aecf8" volumeName="kubernetes.io/projected/56365780-b87d-43fc-95f5-8a44166aecf8-kube-api-access-5rzx9" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265919 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57227a66-c758-4a46-a5e1-f603baa3f570" volumeName="kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265931 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265941 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a591384f-f83e-4f65-b5d0-d519f05edbd9" volumeName="kubernetes.io/projected/a591384f-f83e-4f65-b5d0-d519f05edbd9-kube-api-access-vbmx9" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265949 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cc190d-5647-40a1-bfe9-5355bcb33b10" volumeName="kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265963 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: 
I0319 09:33:35.265973 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265981 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c" volumeName="kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265990 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57227a66-c758-4a46-a5e1-f603baa3f570" volumeName="kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.265999 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67e5534b-f428-45cf-b54e-d06b25dc3e09" volumeName="kubernetes.io/projected/67e5534b-f428-45cf-b54e-d06b25dc3e09-kube-api-access-s45nc" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266007 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" volumeName="kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266015 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d504cbc7-5c09-4712-9f7a-c41a6386ef79" volumeName="kubernetes.io/projected/d504cbc7-5c09-4712-9f7a-c41a6386ef79-kube-api-access-tmwbr" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: 
I0319 09:33:35.266024 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266033 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fca1a4-4fa7-4a43-b9c4-7f50a8737643" volumeName="kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266041 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45523224-f530-4354-90de-7fd65a1a3911" volumeName="kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266051 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266059 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="561b7381-8439-4ccc-ac50-d7a50aeb0c55" volumeName="kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266068 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58fbf09a-3a26-45ab-8496-11d05c27e9cf" volumeName="kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 
kubenswrapper[27819]: I0319 09:33:35.266080 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d66c30b6-67ad-4864-8b51-0424d462ac98" volumeName="kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266093 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="525b41b5-82d8-4d47-8350-79644a2c9360" volumeName="kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266105 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d66c30b6-67ad-4864-8b51-0424d462ac98" volumeName="kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266117 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dde1a2d9-a43e-4b26-82d7-e0f83577468f" volumeName="kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-tuned" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266131 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266143 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ded5da9a-1447-46df-a8ff-ffd469562599" volumeName="kubernetes.io/projected/ded5da9a-1447-46df-a8ff-ffd469562599-kube-api-access" seLinuxMountContext="" Mar 19 09:33:35.267359 
master-0 kubenswrapper[27819]: I0319 09:33:35.266153 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266163 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-image-import-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266172 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ed4ce2b-080f-4523-8527-eee768e06123" volumeName="kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266181 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266189 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-client" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266198 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17e0cb4a-e776-4886-927e-ae446af7f234" volumeName="kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 19 
09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266205 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0491730-604c-4a66-b827-458da88d262b" volumeName="kubernetes.io/projected/e0491730-604c-4a66-b827-458da88d262b-kube-api-access-gmqts" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266214 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266222 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266229 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e" volumeName="kubernetes.io/projected/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-kube-api-access-rp5rd" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266240 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" volumeName="kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266249 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c" volumeName="kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc" seLinuxMountContext="" 
Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266257 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14438c84-72d3-4f45-88a4-fc7e80df5fb8" volumeName="kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266265 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266274 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d3fd276-2fe2-423a-b1ee-f27f1596d013" volumeName="kubernetes.io/projected/9d3fd276-2fe2-423a-b1ee-f27f1596d013-kube-api-access-cqc86" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266282 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17e0cb4a-e776-4886-927e-ae446af7f234" volumeName="kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266291 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4a73a5b0-478f-496d-8b0c-9e3daf39c082" volumeName="kubernetes.io/projected/4a73a5b0-478f-496d-8b0c-9e3daf39c082-kube-api-access-qtj5f" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266299 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="561b7381-8439-4ccc-ac50-d7a50aeb0c55" volumeName="kubernetes.io/projected/561b7381-8439-4ccc-ac50-d7a50aeb0c55-kube-api-access-t6t27" 
seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266307 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="525b41b5-82d8-4d47-8350-79644a2c9360" volumeName="kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266320 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3610f08-aba1-411d-aa6d-811b88acdb7b" volumeName="kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266330 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8a7e077-3f6c-4efb-9865-cf82480c5da1" volumeName="kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-utilities" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266339 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef53432-93f5-4581-b3de-c8cc5cac2ecb" volumeName="kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266347 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3610f08-aba1-411d-aa6d-811b88acdb7b" volumeName="kubernetes.io/projected/c3610f08-aba1-411d-aa6d-811b88acdb7b-kube-api-access-jdgvx" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266357 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="211d123b-829c-49dd-b119-e172cab607cf" 
volumeName="kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266366 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3883b232-5772-460f-9e94-b4cbc7b7e638" volumeName="kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266376 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="561b7381-8439-4ccc-ac50-d7a50aeb0c55" volumeName="kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266385 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266396 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="561b7381-8439-4ccc-ac50-d7a50aeb0c55" volumeName="kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266405 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d80f71af-e3ff-4a9f-8c9c-883a6a5581d0" volumeName="kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266413 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e25a16f3-dfe0-49c5-a31d-e310d369f406" volumeName="kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266423 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266432 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70258988-8374-4aee-aaa2-be3c2e853062" volumeName="kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266441 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" volumeName="kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266451 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3883b232-5772-460f-9e94-b4cbc7b7e638" volumeName="kubernetes.io/projected/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-api-access-nfmmt" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266461 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56365780-b87d-43fc-95f5-8a44166aecf8" volumeName="kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266472 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d58c6b38-ef11-465c-9fee-b83b84ce4669" volumeName="kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-ca-certs" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266495 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="72756f50-c970-4ef6-b8ca-88e49f996a74" volumeName="kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-catalog-content" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266505 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fed75514-8f48-40b7-9fed-0afd6042cfbf" volumeName="kubernetes.io/secret/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-key" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266513 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-encryption-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266523 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" volumeName="kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266533 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" volumeName="kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266557 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a67ae8dc-240d-4708-9139-1d49c601e552" volumeName="kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266573 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a67ae8dc-240d-4708-9139-1d49c601e552" volumeName="kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266585 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd1425b9-fcd1-4aba-899f-e110eebce626" volumeName="kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266597 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cef53432-93f5-4581-b3de-c8cc5cac2ecb" volumeName="kubernetes.io/projected/cef53432-93f5-4581-b3de-c8cc5cac2ecb-kube-api-access-sm9vh" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266614 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14438c84-72d3-4f45-88a4-fc7e80df5fb8" volumeName="kubernetes.io/projected/14438c84-72d3-4f45-88a4-fc7e80df5fb8-kube-api-access-dfdkb" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266627 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f81774a-22a4-4335-961b-04e53e0f3b5e" volumeName="kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266639 27819 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" volumeName="kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266653 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0adaea87-67d0-41a7-a1f3-855fdd483aca" volumeName="kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266665 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266677 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c222998f-6211-4466-8ad7-5d9fcfb10789" volumeName="kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266688 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e46a34-8a00-4bb3-869b-8a5911ef6cf8" volumeName="kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266697 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5" volumeName="kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-catalog-content" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266706 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="60683578-6673-4aff-b1d5-3167d534ac08" volumeName="kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266715 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dde1a2d9-a43e-4b26-82d7-e0f83577468f" volumeName="kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-tmp" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266725 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e25a16f3-dfe0-49c5-a31d-e310d369f406" volumeName="kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266738 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14438c84-72d3-4f45-88a4-fc7e80df5fb8" volumeName="kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266748 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57227a66-c758-4a46-a5e1-f603baa3f570" volumeName="kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266757 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" volumeName="kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266766 27819 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266774 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266878 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266893 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b" volumeName="kubernetes.io/projected/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b-kube-api-access-k5hmg" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266909 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" volumeName="kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266920 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55440bf9-0881-4823-af64-5652c2ad89ff" volumeName="kubernetes.io/empty-dir/55440bf9-0881-4823-af64-5652c2ad89ff-tmpfs" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266929 27819 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" volumeName="kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266938 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" volumeName="kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-kube-api-access-rrmjf" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266946 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e" volumeName="kubernetes.io/configmap/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-mcd-auth-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266955 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55440bf9-0881-4823-af64-5652c2ad89ff" volumeName="kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-webhook-cert" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266964 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d58c6b38-ef11-465c-9fee-b83b84ce4669" volumeName="kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-kube-api-access-bs6m8" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266973 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b42aee2f-bffc-4c43-bf20-16d9c67d216c" volumeName="kubernetes.io/projected/b42aee2f-bffc-4c43-bf20-16d9c67d216c-kube-api-access-lbvbr" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266983 27819 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="561b7381-8439-4ccc-ac50-d7a50aeb0c55" volumeName="kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.266991 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56365780-b87d-43fc-95f5-8a44166aecf8" volumeName="kubernetes.io/configmap/56365780-b87d-43fc-95f5-8a44166aecf8-config-volume" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267000 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" volumeName="kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267009 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="561b7381-8439-4ccc-ac50-d7a50aeb0c55" volumeName="kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267018 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cc45721-c05b-4161-91d9-d65cf6ec61d4" volumeName="kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267027 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d58c6b38-ef11-465c-9fee-b83b84ce4669" volumeName="kubernetes.io/empty-dir/d58c6b38-ef11-465c-9fee-b83b84ce4669-cache" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267037 27819 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-trusted-ca-bundle" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267053 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70258988-8374-4aee-aaa2-be3c2e853062" volumeName="kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267061 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" volumeName="kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267070 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b230b9d-529c-4b28-bc73-659a28d7961a" volumeName="kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267079 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267088 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e" volumeName="kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 
09:33:35.267097 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" volumeName="kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267106 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676f4062-ea34-48d0-80d7-3cd3d9da341e" volumeName="kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267114 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" volumeName="kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267123 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7" seLinuxMountContext="" Mar 19 09:33:35.267359 master-0 kubenswrapper[27819]: I0319 09:33:35.267133 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67e5534b-f428-45cf-b54e-d06b25dc3e09" volumeName="kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267143 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cc190d-5647-40a1-bfe9-5355bcb33b10" volumeName="kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 
09:33:35.267151 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" volumeName="kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267172 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14ee9a22-5b04-402c-98e9-35e2eb7cb2a2" volumeName="kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267182 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3610f08-aba1-411d-aa6d-811b88acdb7b" volumeName="kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267192 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267201 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="47da8964-3606-4181-87fb-8f04a3065295" volumeName="kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267210 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff5aeea-f859-4e38-bf1c-9e730025c212" volumeName="kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 
09:33:35.267219 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7825a2ac-eab6-4988-861a-9e3bfdf5dcc8" volumeName="kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267229 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" volumeName="kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267238 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3883b232-5772-460f-9e94-b4cbc7b7e638" volumeName="kubernetes.io/empty-dir/3883b232-5772-460f-9e94-b4cbc7b7e638-volume-directive-shadow" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267246 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267255 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267263 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff5aeea-f859-4e38-bf1c-9e730025c212" volumeName="kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 
09:33:35.267271 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0491730-604c-4a66-b827-458da88d262b" volumeName="kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267279 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8a7e077-3f6c-4efb-9865-cf82480c5da1" volumeName="kubernetes.io/projected/e8a7e077-3f6c-4efb-9865-cf82480c5da1-kube-api-access-mncvz" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267289 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" volumeName="kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267296 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e46a34-8a00-4bb3-869b-8a5911ef6cf8" volumeName="kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267307 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5" volumeName="kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-utilities" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267318 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14ee9a22-5b04-402c-98e9-35e2eb7cb2a2" volumeName="kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 
09:33:35.267329 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" volumeName="kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267341 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1b230b9d-529c-4b28-bc73-659a28d7961a" volumeName="kubernetes.io/projected/1b230b9d-529c-4b28-bc73-659a28d7961a-kube-api-access-mxz2j" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267353 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" volumeName="kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267363 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="72756f50-c970-4ef6-b8ca-88e49f996a74" volumeName="kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-utilities" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267373 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267382 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7825a2ac-eab6-4988-861a-9e3bfdf5dcc8" volumeName="kubernetes.io/projected/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-kube-api-access-s9tpx" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 
09:33:35.267391 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58fbf09a-3a26-45ab-8496-11d05c27e9cf" volumeName="kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267400 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5" volumeName="kubernetes.io/projected/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-kube-api-access-g8p7b" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267411 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ded5da9a-1447-46df-a8ff-ffd469562599" volumeName="kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267421 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f81774a-22a4-4335-961b-04e53e0f3b5e" volumeName="kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267432 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f81774a-22a4-4335-961b-04e53e0f3b5e" volumeName="kubernetes.io/projected/3f81774a-22a4-4335-961b-04e53e0f3b5e-kube-api-access-2w48g" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267441 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57227a66-c758-4a46-a5e1-f603baa3f570" volumeName="kubernetes.io/projected/57227a66-c758-4a46-a5e1-f603baa3f570-kube-api-access-flln7" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 
kubenswrapper[27819]: I0319 09:33:35.267451 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267461 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" volumeName="kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267471 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d504cbc7-5c09-4712-9f7a-c41a6386ef79" volumeName="kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-utilities" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267479 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" volumeName="kubernetes.io/empty-dir/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-cache" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267492 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" volumeName="kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-ca-certs" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267503 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14ee9a22-5b04-402c-98e9-35e2eb7cb2a2" volumeName="kubernetes.io/projected/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-kube-api-access-7g2ng" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 
kubenswrapper[27819]: I0319 09:33:35.267512 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" volumeName="kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267521 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53bff8e4-bf60-4386-8905-49d43fd6c420" volumeName="kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267531 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7825a2ac-eab6-4988-861a-9e3bfdf5dcc8" volumeName="kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267558 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" volumeName="kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267571 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c222998f-6211-4466-8ad7-5d9fcfb10789" volumeName="kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq" seLinuxMountContext="" Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267585 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" volumeName="kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert" seLinuxMountContext="" Mar 19 
09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267596 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14ee9a22-5b04-402c-98e9-35e2eb7cb2a2" volumeName="kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267605 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" volumeName="kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267615 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55440bf9-0881-4823-af64-5652c2ad89ff" volumeName="kubernetes.io/projected/55440bf9-0881-4823-af64-5652c2ad89ff-kube-api-access-gtjps" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267624 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="67e5534b-f428-45cf-b54e-d06b25dc3e09" volumeName="kubernetes.io/configmap/67e5534b-f428-45cf-b54e-d06b25dc3e09-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267634 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="72756f50-c970-4ef6-b8ca-88e49f996a74" volumeName="kubernetes.io/projected/72756f50-c970-4ef6-b8ca-88e49f996a74-kube-api-access-zxn9l" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267643 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0491730-604c-4a66-b827-458da88d262b" volumeName="kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267654 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="672ad0aa-a0c5-4640-840d-3ffa02c55d62" volumeName="kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267663 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70e8c62b-97c3-4c0c-85d3-f660118831fd" volumeName="kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267672 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" volumeName="kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267682 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c222998f-6211-4466-8ad7-5d9fcfb10789" volumeName="kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267691 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e46a34-8a00-4bb3-869b-8a5911ef6cf8" volumeName="kubernetes.io/empty-dir/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-textfile" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267699 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a07456d-2e8e-4e80-a777-d0903ad21f07" volumeName="kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267708 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3eeb72c3-1a56-4955-845e-81607513b1b2" volumeName="kubernetes.io/projected/3eeb72c3-1a56-4955-845e-81607513b1b2-kube-api-access-jns5r" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267716 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1669b77c-4bef-42d5-ad0b-63c12a6677b2" volumeName="kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267725 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" volumeName="kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267734 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a67ae8dc-240d-4708-9139-1d49c601e552" volumeName="kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267744 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dde1a2d9-a43e-4b26-82d7-e0f83577468f" volumeName="kubernetes.io/projected/dde1a2d9-a43e-4b26-82d7-e0f83577468f-kube-api-access-npg9k" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267753 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267762 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe1881fb-c670-442a-a092-c1eee6b7d5e5" volumeName="kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267771 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cc190d-5647-40a1-bfe9-5355bcb33b10" volumeName="kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267780 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31e46a34-8a00-4bb3-869b-8a5911ef6cf8" volumeName="kubernetes.io/projected/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-kube-api-access-ssdjz" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267790 27819 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="676f4062-ea34-48d0-80d7-3cd3d9da341e" volumeName="kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l" seLinuxMountContext=""
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267799 27819 reconstruct.go:97] "Volume reconstruction finished"
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.267806 27819 reconciler.go:26] "Reconciler: start to sync state"
Mar 19 09:33:35.275104 master-0 kubenswrapper[27819]: I0319 09:33:35.270926 27819 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 19 09:33:35.278089 master-0 kubenswrapper[27819]: I0319 09:33:35.275689 27819 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 19 09:33:35.278850 master-0 kubenswrapper[27819]: I0319 09:33:35.278471 27819 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 19 09:33:35.278850 master-0 kubenswrapper[27819]: I0319 09:33:35.278525 27819 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 19 09:33:35.278850 master-0 kubenswrapper[27819]: I0319 09:33:35.278567 27819 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 19 09:33:35.278850 master-0 kubenswrapper[27819]: E0319 09:33:35.278626 27819 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 19 09:33:35.280613 master-0 kubenswrapper[27819]: I0319 09:33:35.280537 27819 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 19 09:33:35.293041 master-0 kubenswrapper[27819]: I0319 09:33:35.292609 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_fe4e3a0b-973b-4534-b91c-1e870e4e5c32/installer/0.log"
Mar 19 09:33:35.293041 master-0 kubenswrapper[27819]: I0319 09:33:35.292655 27819 generic.go:334] "Generic (PLEG): container finished" podID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerID="4772110931eb3a91b47fd2a5b7d728bb53faceca1654dd37bae708926fff76ac" exitCode=1
Mar 19 09:33:35.297658 master-0 kubenswrapper[27819]: I0319 09:33:35.297571 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-6vplt_16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff/openshift-controller-manager-operator/1.log"
Mar 19 09:33:35.297785 master-0 kubenswrapper[27819]: I0319 09:33:35.297621 27819 generic.go:334] "Generic (PLEG): container finished" podID="16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff" containerID="620239dc4a60804d8418bde885755ec6483c00980113b997aa1fddf56697d09e" exitCode=255
Mar 19 09:33:35.320005 master-0 kubenswrapper[27819]: I0319 09:33:35.319941 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-blgk8_de72ea6c-f3ce-41a5-9a43-9db4f27ed84b/snapshot-controller/3.log"
Mar 19 09:33:35.320196 master-0 kubenswrapper[27819]: I0319 09:33:35.320034 27819 generic.go:334] "Generic (PLEG): container finished" podID="de72ea6c-f3ce-41a5-9a43-9db4f27ed84b" containerID="2f1679234e3694d80243a02d6a6d57a153a01b0b633d91582b95afeeb92e4ab4" exitCode=1
Mar 19 09:33:35.326946 master-0 kubenswrapper[27819]: I0319 09:33:35.326912 27819 generic.go:334] "Generic (PLEG): container finished" podID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerID="4f7ae82c42fcdc2525bbc875f58985f627c3385f9956bdf7d697087dac6e3a2f" exitCode=0
Mar 19 09:33:35.331936 master-0 kubenswrapper[27819]: I0319 09:33:35.331702 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-p4hvm_7825a2ac-eab6-4988-861a-9e3bfdf5dcc8/cluster-autoscaler-operator/0.log"
Mar 19 09:33:35.332142 master-0 kubenswrapper[27819]: I0319 09:33:35.332110 27819 generic.go:334] "Generic (PLEG): container finished" podID="7825a2ac-eab6-4988-861a-9e3bfdf5dcc8" containerID="d53ad972361319c74f326b3096df26b027816cd81f61b8b72dac0988e8a98e3b" exitCode=255
Mar 19 09:33:35.335144 master-0 kubenswrapper[27819]: I0319 09:33:35.335113 27819 generic.go:334] "Generic (PLEG): container finished" podID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerID="a42491788debafa4b5caebd582505d3e959b8406cff2a3c8d4b9e3e0ecd564e8" exitCode=0
Mar 19 09:33:35.335144 master-0 kubenswrapper[27819]: I0319 09:33:35.335141 27819 generic.go:334] "Generic (PLEG): container finished" podID="d66c30b6-67ad-4864-8b51-0424d462ac98" containerID="df015e37363b9eb628b8a08ca5e9d7aac56b16bc451c8914eb82e1273a54c66d" exitCode=0
Mar 19 09:33:35.337042 master-0 kubenswrapper[27819]: I0319 09:33:35.337008 27819 generic.go:334] "Generic (PLEG): container finished" podID="012cdc1d-ebc8-431e-9a52-9a39de95dd0d" containerID="121fbce462a7eafb62e39e83f1f28d2288860d27710d3e9a06350c53d4d1dd76" exitCode=0
Mar 19 09:33:35.339325 master-0 kubenswrapper[27819]: I0319 09:33:35.339293 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/config-sync-controllers/0.log"
Mar 19 09:33:35.339898 master-0 kubenswrapper[27819]: I0319 09:33:35.339867 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-sbgz2_c3610f08-aba1-411d-aa6d-811b88acdb7b/cluster-cloud-controller-manager/0.log"
Mar 19 09:33:35.339976 master-0 kubenswrapper[27819]: I0319 09:33:35.339915 27819 generic.go:334] "Generic (PLEG): container finished" podID="c3610f08-aba1-411d-aa6d-811b88acdb7b" containerID="ccbf8c179749d131ecca685672edda794d3d9e56e155b18ba174f1ad15f4ce67" exitCode=1
Mar 19 09:33:35.339976 master-0 kubenswrapper[27819]: I0319 09:33:35.339936 27819 generic.go:334] "Generic (PLEG): container finished" podID="c3610f08-aba1-411d-aa6d-811b88acdb7b" containerID="774e8a3e480c092251698110fbb5b53d79965d955c1c4ce2867552029267208f" exitCode=1
Mar 19 09:33:35.348082 master-0 kubenswrapper[27819]: I0319 09:33:35.348027 27819 generic.go:334] "Generic (PLEG): container finished" podID="67e5534b-f428-45cf-b54e-d06b25dc3e09" containerID="c9951e834eac9fa8b70d5e1fa9bb37afc3d9012f0b6806bedca4371ec18ecd3e" exitCode=0
Mar 19 09:33:35.350352 master-0 kubenswrapper[27819]: I0319 09:33:35.350311 27819 generic.go:334] "Generic (PLEG): container finished" podID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerID="9b72a735e8178867a7e32af1f6ff03d583d0af440844ffb7c12f63cbd3f26349" exitCode=0
Mar 19 09:33:35.356287 master-0 kubenswrapper[27819]: I0319 09:33:35.356249 27819 generic.go:334] "Generic (PLEG): container finished" podID="d6cd2eac-6412-4f38-8272-743c67b218a3" containerID="405f9880ce91d786192d330c1e84c542474ebb205faf0f516cd0ea59e7fb46ac" exitCode=0
Mar 19 09:33:35.357769 master-0 kubenswrapper[27819]: I0319 09:33:35.357746 27819 generic.go:334] "Generic (PLEG): container finished" podID="1187ddcd-3b78-4b3f-9b12-06ce76cb6040" containerID="fcb63173a1674e9ce9fc5d4b055442992b282a4bd8e174a8bafa997bfbff21e0" exitCode=0
Mar 19 09:33:35.359342 master-0 kubenswrapper[27819]: I0319 09:33:35.359308 27819 generic.go:334] "Generic (PLEG): container finished" podID="5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5" containerID="7317e2fe3007499f7bd9b22966e52c4d2f14f432eb7fd09f964544754c6d642d" exitCode=0
Mar 19 09:33:35.359392 master-0 kubenswrapper[27819]: I0319 09:33:35.359343 27819 generic.go:334] "Generic (PLEG): container finished" podID="5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5" containerID="be8bce13d740e2f9e98bf0d2d8675ba153adc7ecfb63753dde92f39709976021" exitCode=0
Mar 19 09:33:35.366720 master-0 kubenswrapper[27819]: I0319 09:33:35.366682 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 19 09:33:35.367138 master-0 kubenswrapper[27819]: I0319 09:33:35.367099 27819 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644" exitCode=1
Mar 19 09:33:35.367138 master-0 kubenswrapper[27819]: I0319 09:33:35.367131 27819 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="b213f6d8da0d4384e45f89c17fb5962fd352a3cea0a7f3f8261c476ba746dbca" exitCode=0
Mar 19 09:33:35.369498 master-0 kubenswrapper[27819]: I0319 09:33:35.369464 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_20ba861f-a073-4d60-9136-041c2e98dd0f/installer/0.log"
Mar 19 09:33:35.369618 master-0 kubenswrapper[27819]: I0319 09:33:35.369511 27819 generic.go:334] "Generic (PLEG): container finished" podID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerID="ab53721c199f233bd43c54da36cf0743a555ab62518f114872a0db72d2d2af5a" exitCode=1
Mar 19 09:33:35.371882 master-0 kubenswrapper[27819]: I0319 09:33:35.371861 27819 generic.go:334] "Generic (PLEG): container finished" podID="17e0cb4a-e776-4886-927e-ae446af7f234" containerID="c30f2036341c158a4a311a14ce582436d41a1a42842791b6c421ca4a779f1492" exitCode=0
Mar 19 09:33:35.371882 master-0 kubenswrapper[27819]: I0319 09:33:35.371881 27819 generic.go:334] "Generic (PLEG): container finished" podID="17e0cb4a-e776-4886-927e-ae446af7f234" containerID="a77554a501a64db0cbf8b7e5fc03fd9507d3d6aa78d1ae228437911712e2adbe" exitCode=0
Mar 19 09:33:35.371973 master-0 kubenswrapper[27819]: I0319 09:33:35.371892 27819 generic.go:334] "Generic (PLEG): container finished" podID="17e0cb4a-e776-4886-927e-ae446af7f234" containerID="2e89abc0f17fc465edcdc9ff26f6e87d57f135c537e0e0141992b6c68f2869ef" exitCode=0
Mar 19 09:33:35.378514 master-0 kubenswrapper[27819]: I0319 09:33:35.378486 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-rgzxb_d58c6b38-ef11-465c-9fee-b83b84ce4669/manager/1.log"
Mar 19 09:33:35.378794 master-0 kubenswrapper[27819]: E0319 09:33:35.378777 27819 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 19 09:33:35.378856 master-0 kubenswrapper[27819]: I0319 09:33:35.378807 27819 generic.go:334] "Generic (PLEG): container finished" podID="d58c6b38-ef11-465c-9fee-b83b84ce4669" containerID="dcf8449f5d3f1db5b4898f4c8c2b4608a599a27e519d404d772fcd47ce167dc0" exitCode=1
Mar 19 09:33:35.382427 master-0 kubenswrapper[27819]: I0319 09:33:35.382394 27819 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="c8bff62b4e05425e80c7e14b2ad4d089fe60c7b7e27feb3cfc2b1fde8c062902" exitCode=0
Mar 19 09:33:35.382427 master-0 kubenswrapper[27819]: I0319 09:33:35.382425 27819 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="58f2d035e725f793e501aa00d5cd6dec60187d755b95ed0332885f977a2d1232" exitCode=0
Mar 19 09:33:35.382427 master-0 kubenswrapper[27819]: I0319 09:33:35.382434 27819 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="86d7bf6f8a152beed53ca9a59153f0d5628c8aeeca38c4e7133940d1c9f346af" exitCode=0
Mar 19 09:33:35.382617 master-0 kubenswrapper[27819]: I0319 09:33:35.382443 27819 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="20b5e36de175a38e8938a8e709cd8fa1a5177137ac9ceff4b103028234492d38" exitCode=0
Mar 19 09:33:35.382617 master-0 kubenswrapper[27819]: I0319 09:33:35.382451 27819 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="575ffdeb036bb96884333ecfd381cd08c10d745628010252b611aaa18d03bb88" exitCode=0
Mar 19 09:33:35.382617 master-0 kubenswrapper[27819]: I0319 09:33:35.382457 27819 generic.go:334] "Generic (PLEG): container finished" podID="60683578-6673-4aff-b1d5-3167d534ac08" containerID="be5668fe1c571dde1e396c091e4c7ec37d88531f9ac3613886b71274efe031c6" exitCode=0
Mar 19 09:33:35.384358 master-0 kubenswrapper[27819]: I0319 09:33:35.384340 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/4.log"
Mar 19 09:33:35.384712 master-0 kubenswrapper[27819]: I0319 09:33:35.384694 27819 generic.go:334] "Generic (PLEG): container finished" podID="8bdeb4f3-99f7-44ef-beac-53c3cc073c5a" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7" exitCode=1
Mar 19 09:33:35.386632 master-0 kubenswrapper[27819]: I0319 09:33:35.386603 27819 generic.go:334] "Generic (PLEG): container finished" podID="53bff8e4-bf60-4386-8905-49d43fd6c420" containerID="63daec6a7a54ee857885e15f0afbbf6fb5689d16eaffe329ad8c85a73d06000a" exitCode=0
Mar 19 09:33:35.390636 master-0 kubenswrapper[27819]: I0319 09:33:35.390590 27819 generic.go:334] "Generic (PLEG): container finished" podID="98826625-8de0-4bf7-8926-ec62517369e5" containerID="47f63f0f88f52262ec4bb448c720e1d131874e1c77a757276ce8eb2d6c24cab5" exitCode=0
Mar 19 09:33:35.395268 master-0 kubenswrapper[27819]: I0319 09:33:35.395229 27819 generic.go:334] "Generic (PLEG): container finished" podID="1e14d946-54b8-4a3d-ae9f-ae82c5393ad4" containerID="c28fd5198d7f8466f8d4a9327cbc9eb5d80742ce9844b91bf8ba1a1a20dc6eae" exitCode=0
Mar 19 09:33:35.399305 master-0 kubenswrapper[27819]: I0319 09:33:35.399276 27819 generic.go:334] "Generic (PLEG): container finished" podID="31e46a34-8a00-4bb3-869b-8a5911ef6cf8" containerID="cb34860bbc4a03b9ef51077399d9cc73004aea17ddd2a3769650b04afea52e7b" exitCode=0
Mar 19 09:33:35.404092 master-0 kubenswrapper[27819]: I0319 09:33:35.404041 27819 generic.go:334] "Generic (PLEG): container finished" podID="1669b77c-4bef-42d5-ad0b-63c12a6677b2" containerID="8fdd3e54be9275c8b5b5e2dc371f021c349c7ee0ec07fc61904fbdf75b35b7e2" exitCode=0
Mar 19 09:33:35.405708 master-0 kubenswrapper[27819]: I0319 09:33:35.405683 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-gkvf5_bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c/network-operator/0.log"
Mar 19 09:33:35.405777 master-0 kubenswrapper[27819]: I0319 09:33:35.405723 27819 generic.go:334] "Generic (PLEG): container finished" podID="bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c" containerID="794857861e41452767f8150da770c0fdb6415a1b4c58da2ca5c6bb1b5694eb77" exitCode=255
Mar 19 09:33:35.407959 master-0 kubenswrapper[27819]: I0319 09:33:35.407928 27819 generic.go:334] "Generic (PLEG): container finished" podID="70258988-8374-4aee-aaa2-be3c2e853062" containerID="3e4b6d4a6ba7dc16d944e3b9eee5d338268651e600b3b4017cd71ee472e3564c" exitCode=0
Mar 19 09:33:35.410978 master-0 kubenswrapper[27819]: I0319 09:33:35.410933 27819 generic.go:334] "Generic (PLEG): container finished" podID="525b41b5-82d8-4d47-8350-79644a2c9360" containerID="70d174fd4e01098348af77daa0e495ddb88708e136a02b054e3fa91916dd11b3" exitCode=0
Mar 19 09:33:35.413666 master-0 kubenswrapper[27819]: I0319 09:33:35.413629 27819 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="4ff4b935126cc5d750c1d850d7bd8bc2f70fd6fa92c703e7c39a069db8572af3" exitCode=0
Mar 19 09:33:35.413746 master-0 kubenswrapper[27819]: I0319 09:33:35.413667 27819 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="0ea74be9ce6a8db82cc76cb8b1abbace62eee2a97494f9a8b0c0af4311285f49" exitCode=0
Mar 19 09:33:35.413746 master-0 kubenswrapper[27819]: I0319 09:33:35.413685 27819 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="00add47a2cdec59c3ac383946429a4dc013519a6933bbb0d7ebdd58eb0eb7186" exitCode=0
Mar 19 09:33:35.415758 master-0 kubenswrapper[27819]: I0319 09:33:35.415726 27819 generic.go:334] "Generic (PLEG): container finished" podID="46c7cde3-2cb4-4fa8-94ca-d5feff877da9" containerID="e9208fca3070b80809292873e901e7513b6e0cbe29792fde8a62dcde9ce791be" exitCode=0
Mar 19 09:33:35.418975 master-0 kubenswrapper[27819]: I0319 09:33:35.418948 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-check-endpoints/0.log"
Mar 19 09:33:35.420039 master-0 kubenswrapper[27819]: I0319 09:33:35.420015 27819 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="756374cfad040ab2f111ee5526fff718384e34314b3022f03afd3502143ed50c" exitCode=255
Mar 19 09:33:35.420039 master-0 kubenswrapper[27819]: I0319 09:33:35.420037 27819 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244" exitCode=0
Mar 19 09:33:35.421233 master-0 kubenswrapper[27819]: I0319 09:33:35.421195 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_e3ab0802-da8a-475c-a707-09f7838f580b/installer/0.log"
Mar 19 09:33:35.421307 master-0 kubenswrapper[27819]: I0319 09:33:35.421233 27819 generic.go:334] "Generic (PLEG): container finished" podID="e3ab0802-da8a-475c-a707-09f7838f580b" containerID="a1c35003004ca85e3194260594ce7980c9cfead4c46c7a6e5e65ede51128fa87" exitCode=1
Mar 19 09:33:35.422435 master-0 kubenswrapper[27819]: I0319 09:33:35.422390 27819 generic.go:334] "Generic (PLEG): container finished" podID="ded5da9a-1447-46df-a8ff-ffd469562599" containerID="5c0d59f8ce099c748a661f116e21ac9ceeb2f5758bc6d56b40e89d6cb4480b2d" exitCode=0
Mar 19 09:33:35.423672 master-0 kubenswrapper[27819]: I0319 09:33:35.423638 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-9jbdl_cd1425b9-fcd1-4aba-899f-e110eebce626/machine-api-operator/0.log"
Mar 19 09:33:35.423891 master-0 kubenswrapper[27819]: I0319 09:33:35.423856 27819 generic.go:334] "Generic (PLEG): container finished" podID="cd1425b9-fcd1-4aba-899f-e110eebce626" containerID="e58b99f4da3ded2a286482407189e580812fbd5fde61313a0d8876d046001408" exitCode=255
Mar 19 09:33:35.425262 master-0 kubenswrapper[27819]: I0319 09:33:35.425226 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-ttn8h_14ee9a22-5b04-402c-98e9-35e2eb7cb2a2/machine-approver-controller/0.log"
Mar 19 09:33:35.425510 master-0 kubenswrapper[27819]: I0319 09:33:35.425475 27819 generic.go:334] "Generic (PLEG): container finished" podID="14ee9a22-5b04-402c-98e9-35e2eb7cb2a2" containerID="11c939b60a227283973184abab4a74f274bf3ad0ae2f5315dbbcb266dc260e1c" exitCode=255
Mar 19 09:33:35.431765 master-0 kubenswrapper[27819]: I0319 09:33:35.431722 27819 generic.go:334] "Generic (PLEG): container finished" podID="e5780efa-c56a-4953-807f-6a51efc91b09" containerID="bf5e3834612c0d4b8b32cfe23c6154f92dbaf9ab5151f44cf79b7b61c3d85739" exitCode=0
Mar 19 09:33:35.433328 master-0 kubenswrapper[27819]: I0319 09:33:35.433281 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-chzwl_cef53432-93f5-4581-b3de-c8cc5cac2ecb/control-plane-machine-set-operator/0.log"
Mar 19 09:33:35.433394 master-0 kubenswrapper[27819]: I0319 09:33:35.433341 27819 generic.go:334] "Generic (PLEG): container finished" podID="cef53432-93f5-4581-b3de-c8cc5cac2ecb" containerID="bc9135aad8b62aff6fca98f88f979a784539469fc0e4b4ef505d6e449c8e8562" exitCode=1
Mar 19 09:33:35.435348 master-0 kubenswrapper[27819]: I0319 09:33:35.435321 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-8mpp9_a57648b5-1a08-49a7-bedb-f7c1e54d92b4/cluster-node-tuning-operator/0.log"
Mar 19 09:33:35.435613 master-0 kubenswrapper[27819]: I0319 09:33:35.435358 27819 generic.go:334] "Generic (PLEG): container finished" podID="a57648b5-1a08-49a7-bedb-f7c1e54d92b4" containerID="8877b45464c5376d1635f878edec2b26c0ed093e8a5de4899f80eaf0d08390b4" exitCode=1
Mar 19 09:33:35.440666 master-0 kubenswrapper[27819]: I0319 09:33:35.440634 27819 generic.go:334] "Generic (PLEG): container finished" podID="cdcc18f9-66cf-45d9-965d-d0a57fcf285c" containerID="cfed02ef0a3bee4084b5a5748407cbaeafff5b6fc759f0c7f9bdc76ec5af9ce1" exitCode=0
Mar 19 09:33:35.451703 master-0 kubenswrapper[27819]: I0319 09:33:35.451588 27819 generic.go:334] "Generic (PLEG): container finished" podID="d504cbc7-5c09-4712-9f7a-c41a6386ef79" containerID="419f8f2138b335bf2ff24f15ef8dc0bc95062c30db2e877001baa8ff122cf0b9" exitCode=0
Mar 19 09:33:35.451703 master-0 kubenswrapper[27819]: I0319 09:33:35.451626 27819 generic.go:334] "Generic (PLEG): container finished" podID="d504cbc7-5c09-4712-9f7a-c41a6386ef79" containerID="213d80691bc22d27ded0500e6da740b91a8a30071ba53b718db2c03bf881bbb2" exitCode=0
Mar 19 09:33:35.454123 master-0 kubenswrapper[27819]: I0319 09:33:35.454007 27819 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="53fac99b9b6d7113ded13db31c06fb6988d91b7900890060d24517f7c6a3af61" exitCode=0
Mar 19 09:33:35.454123 master-0 kubenswrapper[27819]: I0319 09:33:35.454036 27819 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="95ac7f362ef5d31be76e509ce342250794db8fc83ad49a811e1f5659d7238a79" exitCode=0
Mar 19 09:33:35.454123 master-0 kubenswrapper[27819]: I0319 09:33:35.454044 27819 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="48d42851ba5e1a1222e1f2eb24f68210235c910ac77423fe9def29b71929e2f4" exitCode=2
Mar 19 09:33:35.454123 master-0 kubenswrapper[27819]: I0319 09:33:35.454053 27819 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="e6c6a6b2ffdb2a6ceaac069cb1bbfd1fd6ab268976108284249a62d330f8ad4e" exitCode=0
Mar 19 09:33:35.455628 master-0 kubenswrapper[27819]: I0319 09:33:35.455573 27819 generic.go:334] "Generic (PLEG): container finished" podID="561b7381-8439-4ccc-ac50-d7a50aeb0c55" containerID="c42177f0a6bfccde75c92bce6a5608676cc3c57606fa245b67568e6ea94f8cb0" exitCode=0
Mar 19 09:33:35.458614 master-0 kubenswrapper[27819]: I0319 09:33:35.458410 27819 generic.go:334] "Generic (PLEG): container finished" podID="e8a7e077-3f6c-4efb-9865-cf82480c5da1" containerID="5fddf80528eded5db90a5a83bb8c3ef48b97513cb9fb2edabfb6e5774bd7a4dc" exitCode=0
Mar 19 09:33:35.458614 master-0 kubenswrapper[27819]: I0319 09:33:35.458445 27819 generic.go:334] "Generic (PLEG): container finished" podID="e8a7e077-3f6c-4efb-9865-cf82480c5da1" containerID="84f6a11d7eb8cf18422a0a99e6bf0998baa0e9649ec5853b155cb1c537b44211" exitCode=0
Mar 19 09:33:35.459960 master-0 kubenswrapper[27819]: I0319 09:33:35.459941 27819 generic.go:334] "Generic (PLEG): container finished" podID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerID="d34b15333e7215221eb3166bafa905cc720923c5b54182dc9d2d804528d9b642" exitCode=0
Mar 19 09:33:35.462287 master-0 kubenswrapper[27819]: I0319 09:33:35.462270 27819 generic.go:334] "Generic (PLEG): container finished" podID="70e8c62b-97c3-4c0c-85d3-f660118831fd" containerID="94266d3ee00efef455e5ca3d3eb8a84654ff4253832f76d5e3f187a6614b2325" exitCode=0
Mar 19 09:33:35.465424 master-0 kubenswrapper[27819]: I0319 09:33:35.465364 27819 generic.go:334] "Generic (PLEG): container finished" podID="fe1881fb-c670-442a-a092-c1eee6b7d5e5" containerID="f4bffeec1cd2a6c9d1bd3d0557a50165f71cd47937001ed7d994ee96e6f4f2fd" exitCode=0
Mar 19 09:33:35.483172 master-0 kubenswrapper[27819]: I0319 09:33:35.483137 27819 generic.go:334] "Generic (PLEG): container finished" podID="b9969717-8350-416e-8711-877cdf557d81" containerID="e03d771886973476dcc44da1c43c397db09c499968945f5153359a0c06bc98ab" exitCode=0
Mar 19 09:33:35.490124 master-0 kubenswrapper[27819]: I0319 09:33:35.490083 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ec98e408-a574-40eb-b84d-111edbaab81a/installer/0.log"
Mar 19 09:33:35.490308 master-0 kubenswrapper[27819]: I0319 09:33:35.490136 27819 generic.go:334] "Generic (PLEG): container finished" podID="ec98e408-a574-40eb-b84d-111edbaab81a" containerID="bad1a4ade656dc88a2ff2cedf66c5fd93d2a5c35714abd9bee1ca36e672bdec3" exitCode=1
Mar 19 09:33:35.492674 master-0 kubenswrapper[27819]: I0319 09:33:35.492636 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-sw7cc_3a07456d-2e8e-4e80-a777-d0903ad21f07/cluster-baremetal-operator/1.log"
Mar 19 09:33:35.493032 master-0 kubenswrapper[27819]: I0319 09:33:35.492993 27819 generic.go:334] "Generic (PLEG): container finished" podID="3a07456d-2e8e-4e80-a777-d0903ad21f07" containerID="42d7d82aba9e7b10269b85039d157d860181e8ade15cd12ada9b398768b2c3d9" exitCode=1
Mar 19 09:33:35.503866 master-0 kubenswrapper[27819]: I0319 09:33:35.503830 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-t7zwh_47da8964-3606-4181-87fb-8f04a3065295/approver/1.log"
Mar 19 09:33:35.506196 master-0 kubenswrapper[27819]: I0319 09:33:35.504312 27819 generic.go:334] "Generic (PLEG): container finished" podID="47da8964-3606-4181-87fb-8f04a3065295" containerID="380db29610ce50b23d444ae24a9a82ff721513171d94f5e05240298cc4418dff" exitCode=1
Mar 19 09:33:35.508872 master-0 kubenswrapper[27819]: I0319 09:33:35.508765 27819 generic.go:334] "Generic (PLEG): container finished" podID="58fbf09a-3a26-45ab-8496-11d05c27e9cf" containerID="f7583682489ded760629cc15df0f0f40f6512cf0cba6d9c07d62c71cf5d0483d" exitCode=0
Mar 19 09:33:35.511135 master-0 kubenswrapper[27819]: I0319 09:33:35.511088 27819 generic.go:334] "Generic (PLEG): container finished" podID="a67ae8dc-240d-4708-9139-1d49c601e552" containerID="b5a43433ad01d4c8d725deb00c57fbbcb1186578ae1700355cef7f732ced844c" exitCode=0
Mar 19 09:33:35.513052 master-0 kubenswrapper[27819]: I0319 09:33:35.513017 27819 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="6ef69a9aa568c569e28a8cf9a8398ecd1d39a543a999398bc8742b280aa881bd" exitCode=0
Mar 19 09:33:35.513052 master-0 kubenswrapper[27819]: I0319 09:33:35.513040 27819 generic.go:334] "Generic (PLEG): container finished" podID="67658b93f6f5927402b87ec35623e46e" containerID="33fbab3dae4d95c59279d28953be3dee55bacb9a970231a9a8855ae0fd8f5ddd" exitCode=0
Mar 19 09:33:35.518227 master-0 kubenswrapper[27819]: I0319 09:33:35.518140 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-7v7bv_d5d9fbaf-ba14-4d2b-8376-1634eabbc782/manager/1.log"
Mar 19 09:33:35.518488 master-0 kubenswrapper[27819]: I0319 09:33:35.518463 27819 generic.go:334] "Generic (PLEG): container finished" podID="d5d9fbaf-ba14-4d2b-8376-1634eabbc782" containerID="167cce93a07388fd74c14d6f7c9fcb3960b363bc259d8edc2e5ed4f902650640" exitCode=1
Mar 19 09:33:35.531863 master-0 kubenswrapper[27819]: I0319 09:33:35.531831 27819 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="f17b7ff6194fded74c25ab24964fb6d46dcd1d8e29da6ff5d4563dab4dd944c9" exitCode=0
Mar 19 09:33:35.532047 master-0 kubenswrapper[27819]: I0319 09:33:35.532029 27819 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="88aa2b34d394b9b72033dd41d87e96aa90f1022306b9040706a5972685dd778d" exitCode=0
Mar 19 09:33:35.532128 master-0 kubenswrapper[27819]: I0319 09:33:35.532112 27819 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="0861ab2fae00b28361f12c7b94fd6d71acf9a50d9f9e835730f83b9c6daaad52" exitCode=0
Mar 19 09:33:35.534185 master-0 kubenswrapper[27819]: I0319 09:33:35.534161 27819 generic.go:334] "Generic (PLEG): container finished" podID="fed75514-8f48-40b7-9fed-0afd6042cfbf" containerID="fb94cc236c27d9ae2255663fca024f5b90148e514af1cb8c7ed1eaef28fc1582" exitCode=0
Mar 19 09:33:35.535958 master-0 kubenswrapper[27819]: I0319 09:33:35.535913 27819 generic.go:334] "Generic (PLEG): container finished" podID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerID="542dee821bce6b00fb4a89381e02478e42521bf4fb5559fd959616b012db8e61" exitCode=0
Mar 19 09:33:35.537936 master-0 kubenswrapper[27819]: I0319 09:33:35.537910 27819 generic.go:334] "Generic (PLEG): container finished" podID="ca2f7cb3-8812-4fe3-83a5-61668ef87f99" containerID="685e4b432ade20b1c50ec1b3266543948892457d2831f66c3796f3777b544a6e" exitCode=0
Mar 19 09:33:35.547727 master-0 kubenswrapper[27819]: I0319 09:33:35.547683 27819 generic.go:334] "Generic (PLEG): container finished" podID="72756f50-c970-4ef6-b8ca-88e49f996a74" containerID="83740dd3b2371f1c2f87cb91c9ce70d31804fb806f7201939fdfefa35b3fbd84" exitCode=0
Mar 19 09:33:35.547727 master-0 kubenswrapper[27819]: I0319 09:33:35.547715 27819 generic.go:334] "Generic (PLEG): container finished" podID="72756f50-c970-4ef6-b8ca-88e49f996a74" containerID="2968fa6613c3b81628d2874b0078774e9f4bca1ed372f6d47f80eb1be0cf6041" exitCode=0
Mar 19 09:33:35.554006 master-0 kubenswrapper[27819]: I0319 09:33:35.553970 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-52j2b_e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc/package-server-manager/0.log"
Mar 19 09:33:35.554341 master-0 kubenswrapper[27819]: I0319 09:33:35.554310 27819 generic.go:334] "Generic (PLEG): container finished" podID="e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc" containerID="8ba7304329f0a0ad38a3e444273ac007e5708e5106ec4cfd0157e01f42d39e4e" exitCode=1
Mar 19 09:33:35.558464 master-0 kubenswrapper[27819]: I0319 09:33:35.558409 27819 generic.go:334] "Generic (PLEG): container finished" podID="43fca1a4-4fa7-4a43-b9c4-7f50a8737643" containerID="ea1f7d359b6ee07950af03d5716d56f99f195491d0e7434e7ef9e53aca7d8ce6" exitCode=0
Mar 19 09:33:35.560410 master-0 kubenswrapper[27819]: I0319 09:33:35.560384 27819 generic.go:334] "Generic (PLEG): container finished" podID="57227a66-c758-4a46-a5e1-f603baa3f570" containerID="fd8bb80d426a5da3f781ac199d36ba296827076a405918db4a564ba51e18307a" exitCode=0
Mar 19 09:33:35.566215 master-0 kubenswrapper[27819]: I0319 09:33:35.566166 27819 generic.go:334] "Generic (PLEG): container finished" podID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerID="ef45d184d3bd520a4e4cf7302b2fbd38a0a7146b58fcce765edbe1eaa24e7615" exitCode=0
Mar 19 09:33:35.570600 master-0 kubenswrapper[27819]: I0319 09:33:35.570533 27819 generic.go:334] "Generic (PLEG): container finished" podID="c222998f-6211-4466-8ad7-5d9fcfb10789" containerID="9f898450aabd10f55a00aca1216b3ea60aa3a67621f1566bfc4bf787f1440f93" exitCode=0
Mar 19 09:33:35.581768 master-0 kubenswrapper[27819]: E0319 09:33:35.579052 27819 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 19 09:33:35.753427 master-0 kubenswrapper[27819]: I0319 09:33:35.753234 27819 manager.go:324] Recovery completed
Mar 19 09:33:35.834852 master-0 kubenswrapper[27819]: I0319 09:33:35.834783 27819 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 19 09:33:35.834852 master-0 kubenswrapper[27819]: I0319 09:33:35.834828 27819 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 19 09:33:35.834852 master-0 kubenswrapper[27819]: I0319 09:33:35.834858 27819 state_mem.go:36] "Initialized new in-memory state store"
Mar 19 09:33:35.835172 master-0 kubenswrapper[27819]: I0319 09:33:35.835131 27819 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 19 09:33:35.835212 master-0 kubenswrapper[27819]: I0319 09:33:35.835165 27819 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 19 09:33:35.835212 master-0 kubenswrapper[27819]: I0319 09:33:35.835200 27819 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 19 09:33:35.835212 master-0 kubenswrapper[27819]: I0319 09:33:35.835212 27819 state_checkpoint.go:137] "State checkpoint: defaultCPUSet"
defaultCpuSet="" Mar 19 09:33:35.835324 master-0 kubenswrapper[27819]: I0319 09:33:35.835224 27819 policy_none.go:49] "None policy: Start" Mar 19 09:33:35.838507 master-0 kubenswrapper[27819]: I0319 09:33:35.838466 27819 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 19 09:33:35.838507 master-0 kubenswrapper[27819]: I0319 09:33:35.838504 27819 state_mem.go:35] "Initializing new in-memory state store" Mar 19 09:33:35.838755 master-0 kubenswrapper[27819]: I0319 09:33:35.838724 27819 state_mem.go:75] "Updated machine memory state" Mar 19 09:33:35.838755 master-0 kubenswrapper[27819]: I0319 09:33:35.838745 27819 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 19 09:33:35.854695 master-0 kubenswrapper[27819]: I0319 09:33:35.854656 27819 manager.go:334] "Starting Device Plugin manager" Mar 19 09:33:35.854821 master-0 kubenswrapper[27819]: I0319 09:33:35.854745 27819 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 09:33:35.854821 master-0 kubenswrapper[27819]: I0319 09:33:35.854762 27819 server.go:79] "Starting device plugin registration server" Mar 19 09:33:35.855270 master-0 kubenswrapper[27819]: I0319 09:33:35.855193 27819 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 09:33:35.855270 master-0 kubenswrapper[27819]: I0319 09:33:35.855215 27819 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 09:33:35.859208 master-0 kubenswrapper[27819]: I0319 09:33:35.855404 27819 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 19 09:33:35.859208 master-0 kubenswrapper[27819]: I0319 09:33:35.855515 27819 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 19 09:33:35.859208 master-0 kubenswrapper[27819]: I0319 09:33:35.855525 27819 plugin_manager.go:118] "Starting Kubelet Plugin Manager" 
Mar 19 09:33:35.955661 master-0 kubenswrapper[27819]: I0319 09:33:35.955603 27819 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 19 09:33:35.959387 master-0 kubenswrapper[27819]: I0319 09:33:35.959346 27819 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 19 09:33:35.959461 master-0 kubenswrapper[27819]: I0319 09:33:35.959400 27819 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 19 09:33:35.959461 master-0 kubenswrapper[27819]: I0319 09:33:35.959425 27819 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 19 09:33:35.959560 master-0 kubenswrapper[27819]: I0319 09:33:35.959512 27819 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 19 09:33:35.971235 master-0 kubenswrapper[27819]: I0319 09:33:35.969672 27819 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 19 09:33:35.971235 master-0 kubenswrapper[27819]: I0319 09:33:35.969743 27819 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 19 09:33:35.980219 master-0 kubenswrapper[27819]: I0319 09:33:35.980157 27819 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 09:33:35.980481 master-0 kubenswrapper[27819]: I0319 09:33:35.980458 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:35.980694 master-0 kubenswrapper[27819]: I0319 09:33:35.980661 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:35.981051 master-0 kubenswrapper[27819]: I0319 09:33:35.981009 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e68a93a5e0eb978126226b3b3f9b90c706e1a1f588f63ea47aa67b19c47bdf" Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981070 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6963dcf09d5e0149d22c475eaec1b2f2f1ef0b1db34c37a53fb3c83a0bb650" Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981082 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981134 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"d80d252371a2ad5ed4b58e02a5d1901d38af1954f4eb24253a522a6a46821598"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981184 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa6f8cb5d8c6bf0298daad9cbc84db09fdcf39078ac76e6417bed28402a86c24" Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981223 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"9df04d5fbcf74c680b5a31ee14b15b95259c81da87f4ed60f22768d81cdac068"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981232 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"c94e09f54a9fee6499356aa41d60a008e5c94d53f299d70367f4b907a1410644"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981242 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"b213f6d8da0d4384e45f89c17fb5962fd352a3cea0a7f3f8261c476ba746dbca"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981255 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"c9c5c2555104a5e10c5310ddcc3b28b08a5313436e0c1eca0038c9160b7826e8"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981266 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9e18fc195cbf5fb27f76f640c42d213bffb004a73cf242e7c9e02beeff1062a" Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981311 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c2da2395722223505ad6defea9773e10f1c32ee7ec1b621432372d72816ee7" Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981353 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e200533cb27ec948dd916b5ca2a3d1deddecf9ca980b5eb9bf633e27ae8bf611" Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981366 27819 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"756374cfad040ab2f111ee5526fff718384e34314b3022f03afd3502143ed50c"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981377 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981386 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981396 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981405 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981414 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244"} Mar 19 09:33:35.981406 master-0 kubenswrapper[27819]: I0319 09:33:35.981425 27819 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"cf0dec40bfcc5fadaa23a21d4cf69601f48d610c22d1aa21ed38658c7975c257"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981437 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2904eb335d23e11e23721447bebed6e83898b398c508def8b073f85f1f0f7e4" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981467 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41308b73b0cd59e2ebd3a9e2ccbd13c59e32ef712883338ba6a663fd6955d3dc" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981512 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a67637c3ce9588f542e20565aba89d6f1d4976553a42d7b1a45d6451a0663219" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981531 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b815bb5a4f3237642901cf478d08543a7c45d3f20aa5aa587a69d0647d632b8" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981578 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b4484c29bf71462f8aa83a2438a018a65a72efc3ab1ad01ecc3b27224d1c48" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981594 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2d499fdc3d3fa2bc3d6bd17fe41bec26683d20fa2510fec111d840f7bf16b36" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981633 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8911f4949ad2b1026cf67388b4c856ca207ee327d335f0a0ffbddeb06f138626" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981655 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"e4c5ba739335e2b30a9fc97ef2c426fd0d64a733b74b4eee96d946d003152a68"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981669 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"07ff589d07aa06788418e2b7ce676ec4971687ca5a285dd896ddf4c4eded2fba"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981677 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"c5c62153adf3a271102f4d9d5640d2d1802d2bb90e84f132621e7b506077bc80"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981687 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"9bf2b58b9edb6985d4157d1a669cb411e7d87e4a40043d1cb4839e8d5c366a20"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981695 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"5c20104ce7a41bea06c76dc88ee244675c179a1e54d702272138143050d4f7e0"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981704 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"f17b7ff6194fded74c25ab24964fb6d46dcd1d8e29da6ff5d4563dab4dd944c9"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981715 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"88aa2b34d394b9b72033dd41d87e96aa90f1022306b9040706a5972685dd778d"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981726 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"0861ab2fae00b28361f12c7b94fd6d71acf9a50d9f9e835730f83b9c6daaad52"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981735 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"7432a082c2253d23b865426cbd0b7c6fc641fd734bb3b6088975045dd1832638"} Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981756 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4944acb6dda035dde270308345019acdc87bd2a81d8b65e1c0a2845a63c510d" Mar 19 09:33:35.983978 master-0 kubenswrapper[27819]: I0319 09:33:35.981814 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6f9f00de289e7567b86de252b9a6b1c229d174535eede65fb885a8a83fa2393" Mar 19 09:33:35.993321 master-0 kubenswrapper[27819]: E0319 09:33:35.993276 27819 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:33:35.996417 master-0 kubenswrapper[27819]: E0319 09:33:35.996386 27819 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.074192 master-0 kubenswrapper[27819]: I0319 09:33:36.074131 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod 
\"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.074192 master-0 kubenswrapper[27819]: I0319 09:33:36.074178 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.074192 master-0 kubenswrapper[27819]: I0319 09:33:36.074196 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074216 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074232 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074248 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074262 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074276 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074291 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074306 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 
09:33:36.074319 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074332 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074345 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074361 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074373 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " 
pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074386 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074399 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074413 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074427 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.074475 master-0 kubenswrapper[27819]: I0319 09:33:36.074441 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.092617 master-0 kubenswrapper[27819]: E0319 09:33:36.091194 27819 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.092617 master-0 kubenswrapper[27819]: E0319 09:33:36.091678 27819 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.175422 master-0 kubenswrapper[27819]: I0319 09:33:36.175337 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.175422 master-0 kubenswrapper[27819]: I0319 09:33:36.175397 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175464 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175599 27819 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175622 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175694 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175704 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175734 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175757 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175810 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175853 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175878 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175875 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175897 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175925 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175953 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175978 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.175997 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176012 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176027 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176042 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176057 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176074 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176082 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176098 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.176154 master-0 kubenswrapper[27819]: I0319 09:33:36.176148 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176203 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176248 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176288 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176330 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176374 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176390 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176400 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176422 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176428 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176445 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176453 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176451 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176492 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 19 09:33:36.177153 master-0 kubenswrapper[27819]: I0319 09:33:36.176507 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.210474 master-0 kubenswrapper[27819]: I0319 09:33:36.210406 27819 apiserver.go:52] "Watching apiserver" Mar 19 09:33:36.231639 master-0 kubenswrapper[27819]: I0319 09:33:36.231572 27819 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 09:33:36.237581 master-0 kubenswrapper[27819]: I0319 09:33:36.233319 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr","openshift-multus/multus-additional-cni-plugins-jzj4h","openshift-service-ca/service-ca-79bc6b8d76-4lbsc","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj","openshift-kube-scheduler/installer-4-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc","openshift-marketplace/redhat-operators-7cczg","openshift-kube-controller-manager/installer-4-master-0","openshift-monitoring/node-exporter-k6kn8","openshift-network-operator/iptables-alerter-p9bbz","openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w","openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8","openshift-kube-storage-version-migrator/migrator-8487694857-nsnds","openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl","openshift-machine-config-operator/machine-config-
server-nsnqt","openshift-monitoring/metrics-server-7c64897fc5-qj6vj","openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb","openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72","openshift-marketplace/community-operators-887wl","openshift-multus/multus-admission-controller-58c9f8fc64-69cqn","openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj","openshift-kube-controller-manager/installer-5-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-multus/network-metrics-daemon-lflg7","openshift-etcd/installer-2-master-0","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl","openshift-marketplace/certified-operators-l26xf","openshift-monitoring/kube-state-metrics-7bbc969446-d46h5","openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv","openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-apiserver/installer-4-master-0","openshift-machine-config-operator/machine-config-daemon-rw7tg","openshift-marketplace/marketplace-operator-89ccd998f-stct6","openshift-kube-scheduler/installer-5-master-0","openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5","openshift-dns/node-resolver-mf78p","openshift-ingress-canary/ingress-canary-6r9c4","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-etcd/etcd-master-0","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw","openshift-network-node-identity/network-node-identity-t7zwh","assisted-installer/assisted-installer-controller-kw4xv","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq","openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h","openshift-cluster-node-tuning-operator/tuned-5r5sh","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz","openshift-
cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x","openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/installer-1-master-0","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd","openshift-apiserver/apiserver-6f6b54748-s5cpx","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5","openshift-kube-apiserver/installer-1-master-0","openshift-kube-apiserver/installer-3-master-0","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr","openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k","openshift-ovn-kubernetes/ovnkube-node-zmrpw","openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk","openshift-etcd/installer-1-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/redhat-marketplace-brpbp","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt","openshift-dns-operator/dns-operator-9c5679d8f-k89rz","openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd","openshift-insights/insights-operator-68bf6ff9d6-h4zrl","openshift-network-diagnostics/network-check-target-lql9l","openshift-network-operator/network-operator-7bd846bfc4-gkvf5","openshift-dns/dns-default-9xr8p","openshift-monitoring/telemeter-client-8699f95c5b-7w9vq","openshift-operator-lifecycle-manager/package-s
erver-manager-7b95f86987-52j2b","openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns","openshift-oauth-apiserver/apiserver-775788bf78-tgdnw","openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh","openshift-ingress/router-default-7dcf5569b5-k99cg","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp","openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb","openshift-multus/multus-8pt59"] Mar 19 09:33:36.237581 master-0 kubenswrapper[27819]: I0319 09:33:36.233614 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-kw4xv" Mar 19 09:33:36.237581 master-0 kubenswrapper[27819]: I0319 09:33:36.237196 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 19 09:33:36.237581 master-0 kubenswrapper[27819]: I0319 09:33:36.237451 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 19 09:33:36.238090 master-0 kubenswrapper[27819]: I0319 09:33:36.237982 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 09:33:36.238298 master-0 kubenswrapper[27819]: I0319 09:33:36.238201 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 19 09:33:36.239576 master-0 kubenswrapper[27819]: I0319 09:33:36.238647 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 09:33:36.239576 master-0 kubenswrapper[27819]: I0319 09:33:36.238838 27819 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 09:33:36.239576 master-0 kubenswrapper[27819]: I0319 09:33:36.239273 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 09:33:36.239576 master-0 kubenswrapper[27819]: I0319 09:33:36.239420 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 09:33:36.239794 master-0 kubenswrapper[27819]: I0319 09:33:36.239662 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 19 09:33:36.240476 master-0 kubenswrapper[27819]: I0319 09:33:36.240430 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 09:33:36.240593 master-0 kubenswrapper[27819]: I0319 09:33:36.240482 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.240593 master-0 kubenswrapper[27819]: I0319 09:33:36.240557 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 09:33:36.240818 master-0 kubenswrapper[27819]: I0319 09:33:36.240762 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.240885 master-0 kubenswrapper[27819]: I0319 09:33:36.240855 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.244578 master-0 kubenswrapper[27819]: I0319 09:33:36.240956 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.244578 master-0 kubenswrapper[27819]: 
I0319 09:33:36.241749 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 09:33:36.244578 master-0 kubenswrapper[27819]: I0319 09:33:36.242930 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 09:33:36.245966 master-0 kubenswrapper[27819]: I0319 09:33:36.245910 27819 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="34655e22-f69d-4875-b45b-5a476777e894" Mar 19 09:33:36.254010 master-0 kubenswrapper[27819]: I0319 09:33:36.253945 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 09:33:36.254708 master-0 kubenswrapper[27819]: I0319 09:33:36.254685 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 09:33:36.254937 master-0 kubenswrapper[27819]: I0319 09:33:36.254875 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 09:33:36.255455 master-0 kubenswrapper[27819]: I0319 09:33:36.255385 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 19 09:33:36.255840 master-0 kubenswrapper[27819]: I0319 09:33:36.255800 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 09:33:36.256109 master-0 kubenswrapper[27819]: I0319 09:33:36.256063 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 09:33:36.256109 master-0 kubenswrapper[27819]: I0319 09:33:36.256092 27819 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 09:33:36.256268 master-0 kubenswrapper[27819]: I0319 09:33:36.256118 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 19 09:33:36.256403 master-0 kubenswrapper[27819]: I0319 09:33:36.256342 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 19 09:33:36.262036 master-0 kubenswrapper[27819]: I0319 09:33:36.260852 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 19 09:33:36.262423 master-0 kubenswrapper[27819]: I0319 09:33:36.262391 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 09:33:36.264046 master-0 kubenswrapper[27819]: I0319 09:33:36.264010 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.264175 master-0 kubenswrapper[27819]: I0319 09:33:36.264120 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.264959 master-0 kubenswrapper[27819]: I0319 09:33:36.264917 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 09:33:36.265112 master-0 kubenswrapper[27819]: I0319 09:33:36.265041 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 19 09:33:36.267358 master-0 kubenswrapper[27819]: I0319 09:33:36.267323 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.267449 master-0 kubenswrapper[27819]: I0319 
09:33:36.267349 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.267449 master-0 kubenswrapper[27819]: I0319 09:33:36.267364 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 09:33:36.267798 master-0 kubenswrapper[27819]: I0319 09:33:36.267693 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 09:33:36.268033 master-0 kubenswrapper[27819]: I0319 09:33:36.267969 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 09:33:36.268033 master-0 kubenswrapper[27819]: I0319 09:33:36.267976 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 09:33:36.268129 master-0 kubenswrapper[27819]: I0319 09:33:36.268011 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.268200 master-0 kubenswrapper[27819]: I0319 09:33:36.268155 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.268272 master-0 kubenswrapper[27819]: I0319 09:33:36.268251 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 09:33:36.268820 master-0 kubenswrapper[27819]: I0319 09:33:36.268591 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.268820 master-0 kubenswrapper[27819]: I0319 09:33:36.268743 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 09:33:36.268820 master-0 kubenswrapper[27819]: I0319 09:33:36.268753 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 19 09:33:36.268994 master-0 kubenswrapper[27819]: I0319 09:33:36.268847 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 09:33:36.268994 master-0 kubenswrapper[27819]: I0319 09:33:36.268979 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 19 09:33:36.269264 master-0 kubenswrapper[27819]: I0319 09:33:36.269233 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 09:33:36.269403 master-0 kubenswrapper[27819]: I0319 09:33:36.269380 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.269534 master-0 kubenswrapper[27819]: I0319 09:33:36.269515 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.269822 master-0 kubenswrapper[27819]: I0319 09:33:36.269778 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 09:33:36.269897 master-0 kubenswrapper[27819]: I0319 09:33:36.269857 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 09:33:36.270664 master-0 kubenswrapper[27819]: I0319 09:33:36.270633 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 19 09:33:36.270777 master-0 kubenswrapper[27819]: I0319 09:33:36.270750 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 09:33:36.272834 master-0 kubenswrapper[27819]: I0319 09:33:36.272811 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 09:33:36.273728 master-0 kubenswrapper[27819]: I0319 09:33:36.273260 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 09:33:36.273728 master-0 kubenswrapper[27819]: I0319 09:33:36.273443 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 09:33:36.273728 master-0 kubenswrapper[27819]: I0319 09:33:36.273498 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 09:33:36.273728 master-0 kubenswrapper[27819]: I0319 09:33:36.273588 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 09:33:36.273728 master-0 kubenswrapper[27819]: I0319 09:33:36.273619 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 19 09:33:36.273728 master-0 kubenswrapper[27819]: I0319 09:33:36.273679 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 09:33:36.273728 master-0 kubenswrapper[27819]: I0319 09:33:36.273755 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 09:33:36.274196 master-0 kubenswrapper[27819]: I0319 09:33:36.273935 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 19 09:33:36.274196 master-0 kubenswrapper[27819]: I0319 09:33:36.274026 27819 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 09:33:36.274196 master-0 kubenswrapper[27819]: I0319 09:33:36.274138 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 09:33:36.274434 master-0 kubenswrapper[27819]: I0319 09:33:36.274333 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 09:33:36.274434 master-0 kubenswrapper[27819]: I0319 09:33:36.274410 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 09:33:36.274565 master-0 kubenswrapper[27819]: I0319 09:33:36.274488 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 19 09:33:36.274645 master-0 kubenswrapper[27819]: I0319 09:33:36.274572 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 19 09:33:36.274645 master-0 kubenswrapper[27819]: I0319 09:33:36.274633 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 19 09:33:36.275389 master-0 kubenswrapper[27819]: I0319 09:33:36.274792 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 19 09:33:36.275389 master-0 kubenswrapper[27819]: I0319 09:33:36.274807 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 09:33:36.275389 master-0 kubenswrapper[27819]: I0319 09:33:36.274927 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 09:33:36.275389 master-0 kubenswrapper[27819]: I0319 
09:33:36.275009 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 09:33:36.275559 master-0 kubenswrapper[27819]: I0319 09:33:36.275423 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 09:33:36.275664 master-0 kubenswrapper[27819]: I0319 09:33:36.275638 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 09:33:36.276267 master-0 kubenswrapper[27819]: I0319 09:33:36.276205 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 09:33:36.276427 master-0 kubenswrapper[27819]: I0319 09:33:36.276394 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 09:33:36.276576 master-0 kubenswrapper[27819]: I0319 09:33:36.276524 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 09:33:36.276746 master-0 kubenswrapper[27819]: I0319 09:33:36.276681 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 19 09:33:36.278214 master-0 kubenswrapper[27819]: I0319 09:33:36.278003 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 09:33:36.278214 master-0 kubenswrapper[27819]: I0319 09:33:36.278124 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.280240 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.280522 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.280746 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.280932 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281120 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281139 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281730 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281767 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281807 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281832 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbktm\" (UniqueName: \"kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm\") pod \"csi-snapshot-controller-operator-5f5d689c6b-d89zz\" (UID: \"43fca1a4-4fa7-4a43-b9c4-7f50a8737643\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281857 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281893 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281914 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8bm4\" (UniqueName: \"kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281933 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281952 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt99t\" (UniqueName: \"kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t\") pod \"network-metrics-daemon-lflg7\" (UID: 
\"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281971 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hccqk\" (UniqueName: \"kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.281990 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282008 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282025 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnxk9\" (UniqueName: \"kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282043 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282058 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282075 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282093 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282112 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282134 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7thvr\" (UniqueName: \"kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282162 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282179 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282197 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr4bl\" (UniqueName: \"kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282213 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282230 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w5fk\" (UniqueName: \"kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282247 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282265 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282301 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 
09:33:36.282320 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282351 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282368 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282387 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7rj\" (UniqueName: \"kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282406 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282428 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282470 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282514 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282626 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282643 
27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282662 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c654s\" (UniqueName: \"kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282681 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282698 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282716 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282740 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282767 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282784 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282806 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:33:36.283332 
master-0 kubenswrapper[27819]: I0319 09:33:36.282825 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282846 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282889 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282920 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282968 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.282996 27819 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wpcnv\" (UniqueName: \"kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283051 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283077 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283136 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283164 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: 
\"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283218 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283245 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283300 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283326 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:33:36.283332 master-0 kubenswrapper[27819]: I0319 09:33:36.283388 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283420 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283536 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283599 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283649 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 
kubenswrapper[27819]: I0319 09:33:36.283684 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283738 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283766 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283828 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4n26\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283857 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283913 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283939 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283968 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8cg\" (UniqueName: \"kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.283993 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284021 
27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284046 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284066 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbvbr\" (UniqueName: \"kubernetes.io/projected/b42aee2f-bffc-4c43-bf20-16d9c67d216c-kube-api-access-lbvbr\") pod \"network-check-source-b4bf74f6-tk6ns\" (UID: \"b42aee2f-bffc-4c43-bf20-16d9c67d216c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284082 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284101 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: 
\"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284120 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnjq\" (UniqueName: \"kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284137 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284156 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85vjd\" (UniqueName: \"kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284174 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvtc\" (UniqueName: \"kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284192 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284208 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284228 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbzvl\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284248 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll8k\" (UniqueName: \"kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284266 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" 
(UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284286 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284308 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284327 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284346 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284366 27819 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zbw6q\" (UniqueName: \"kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284387 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284405 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284424 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284444 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: 
\"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284463 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284491 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284515 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284535 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: 
I0319 09:33:36.284581 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284606 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284626 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284646 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284672 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: 
\"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284699 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284739 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284768 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284821 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284844 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284873 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h925l\" (UniqueName: \"kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284899 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmdk\" (UniqueName: \"kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284924 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284952 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 
09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.284977 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285002 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285026 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285053 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47plx\" (UniqueName: \"kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285077 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2svkc\" (UniqueName: \"kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc\") pod 
\"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285098 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8b7s\" (UniqueName: \"kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285116 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xjhk\" (UniqueName: \"kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285138 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2hfh\" (UniqueName: \"kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285155 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 
19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285179 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285198 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285216 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285233 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285249 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285288 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285309 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285366 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285388 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285406 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285423 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285441 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285460 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdvd\" (UniqueName: \"kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285478 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnp7\" (UniqueName: \"kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7\") pod 
\"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285503 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285521 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285565 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfnn\" (UniqueName: \"kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285593 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t58zw\" (UniqueName: \"kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz" Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285617 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285636 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285659 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.285680 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.286136 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e25a16f3-dfe0-49c5-a31d-e310d369f406-srv-cert\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.286380 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/676f4062-ea34-48d0-80d7-3cd3d9da341e-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.286726 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.286974 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/70e8c62b-97c3-4c0c-85d3-f660118831fd-snapshots\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:33:36.286853 master-0 kubenswrapper[27819]: I0319 09:33:36.287015 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.287317 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-config\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.287350 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-cni-binary-copy\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.287254 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe1881fb-c670-442a-a092-c1eee6b7d5e5-serving-cert\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.288129 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.289613 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.289986 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70e8c62b-97c3-4c0c-85d3-f660118831fd-serving-cert\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.290708 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.282971 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.285309 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.292047 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.292229 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.292276 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.292334 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.285378 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.285446 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.292781 27819 scope.go:117] "RemoveContainer" containerID="756374cfad040ab2f111ee5526fff718384e34314b3022f03afd3502143ed50c"
Mar 19 09:33:36.292883 master-0 kubenswrapper[27819]: I0319 09:33:36.285566 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 19 09:33:36.294043 master-0 kubenswrapper[27819]: I0319 09:33:36.286056 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 19 09:33:36.294043 master-0 kubenswrapper[27819]: I0319 09:33:36.286178 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 19 09:33:36.294043 master-0 kubenswrapper[27819]: I0319 09:33:36.286398 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 19 09:33:36.294043 master-0 kubenswrapper[27819]: I0319 09:33:36.286593 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 19 09:33:36.294043 master-0 kubenswrapper[27819]: I0319 09:33:36.286879 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 19 09:33:36.299007 master-0 kubenswrapper[27819]: I0319 09:33:36.294112 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 19 09:33:36.299928 master-0 kubenswrapper[27819]: I0319 09:33:36.299851 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 19 09:33:36.300249 master-0 kubenswrapper[27819]: I0319 09:33:36.300199 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff5aeea-f859-4e38-bf1c-9e730025c212-metrics-certs\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:33:36.302445 master-0 kubenswrapper[27819]: I0319 09:33:36.302375 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.304406 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-config\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.304476 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.304603 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70258988-8374-4aee-aaa2-be3c2e853062-config\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.304978 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.305109 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.305726 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70258988-8374-4aee-aaa2-be3c2e853062-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.306056 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-images\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:33:36.306304 master-0 kubenswrapper[27819]: I0319 09:33:36.306284 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"
Mar 19 09:33:36.307373 master-0 kubenswrapper[27819]: I0319 09:33:36.306698 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d66c30b6-67ad-4864-8b51-0424d462ac98-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:33:36.307373 master-0 kubenswrapper[27819]: I0319 09:33:36.306951 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:33:36.307373 master-0 kubenswrapper[27819]: I0319 09:33:36.307007 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-config\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:33:36.307373 master-0 kubenswrapper[27819]: I0319 09:33:36.307251 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a67ae8dc-240d-4708-9139-1d49c601e552-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"
Mar 19 09:33:36.308110 master-0 kubenswrapper[27819]: I0319 09:33:36.307421 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-binary-copy\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:33:36.308477 master-0 kubenswrapper[27819]: I0319 09:33:36.308444 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-env-overrides\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:33:36.309511 master-0 kubenswrapper[27819]: I0319 09:33:36.309081 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-client\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:33:36.309511 master-0 kubenswrapper[27819]: I0319 09:33:36.309239 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-images\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:33:36.309511 master-0 kubenswrapper[27819]: I0319 09:33:36.309463 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:33:36.309824 master-0 kubenswrapper[27819]: I0319 09:33:36.309643 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-trusted-ca\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:33:36.309824 master-0 kubenswrapper[27819]: I0319 09:33:36.309723 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c222998f-6211-4466-8ad7-5d9fcfb10789-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:33:36.309824 master-0 kubenswrapper[27819]: I0319 09:33:36.309812 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/17e0cb4a-e776-4886-927e-ae446af7f234-operand-assets\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:33:36.310159 master-0 kubenswrapper[27819]: I0319 09:33:36.310030 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-config\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:33:36.310159 master-0 kubenswrapper[27819]: I0319 09:33:36.309868 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-env-overrides\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.310305 master-0 kubenswrapper[27819]: I0319 09:33:36.310245 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/676f4062-ea34-48d0-80d7-3cd3d9da341e-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:33:36.310305 master-0 kubenswrapper[27819]: I0319 09:33:36.310294 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-config\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"
Mar 19 09:33:36.310753 master-0 kubenswrapper[27819]: I0319 09:33:36.310721 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/211d123b-829c-49dd-b119-e172cab607cf-srv-cert\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:33:36.310863 master-0 kubenswrapper[27819]: I0319 09:33:36.310759 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a07456d-2e8e-4e80-a777-d0903ad21f07-config\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:33:36.310979 master-0 kubenswrapper[27819]: I0319 09:33:36.310951 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/525b41b5-82d8-4d47-8350-79644a2c9360-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"
Mar 19 09:33:36.311357 master-0 kubenswrapper[27819]: I0319 09:33:36.311320 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-metrics-tls\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5"
Mar 19 09:33:36.311439 master-0 kubenswrapper[27819]: I0319 09:33:36.311400 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 19 09:33:36.311578 master-0 kubenswrapper[27819]: I0319 09:33:36.311407 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 19 09:33:36.311671 master-0 kubenswrapper[27819]: I0319 09:33:36.311638 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c222998f-6211-4466-8ad7-5d9fcfb10789-proxy-tls\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:33:36.312223 master-0 kubenswrapper[27819]: I0319 09:33:36.312152 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:33:36.312223 master-0 kubenswrapper[27819]: I0319 09:33:36.312184 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:33:36.312409 master-0 kubenswrapper[27819]: I0319 09:33:36.312215 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d66c30b6-67ad-4864-8b51-0424d462ac98-serving-cert\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:33:36.312409 master-0 kubenswrapper[27819]: I0319 09:33:36.312277 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0cb4a-e776-4886-927e-ae446af7f234-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:33:36.312497 master-0 kubenswrapper[27819]: I0319 09:33:36.312423 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/58fbf09a-3a26-45ab-8496-11d05c27e9cf-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:33:36.312497 master-0 kubenswrapper[27819]: I0319 09:33:36.312452 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:33:36.315202 master-0 kubenswrapper[27819]: I0319 09:33:36.315162 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 19 09:33:36.315476 master-0 kubenswrapper[27819]: I0319 09:33:36.315438 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-metrics-tls\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:33:36.315588 master-0 kubenswrapper[27819]: I0319 09:33:36.315563 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53bff8e4-bf60-4386-8905-49d43fd6c420-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:33:36.315659 master-0 kubenswrapper[27819]: I0319 09:33:36.315633 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:33:36.315741 master-0 kubenswrapper[27819]: I0319 09:33:36.315725 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-config\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:33:36.315863 master-0 kubenswrapper[27819]: I0319 09:33:36.315826 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 19 09:33:36.315929 master-0 kubenswrapper[27819]: I0319 09:33:36.315904 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"
Mar 19 09:33:36.316012 master-0 kubenswrapper[27819]: I0319 09:33:36.315982 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6cd2eac-6412-4f38-8272-743c67b218a3-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:33:36.316172 master-0 kubenswrapper[27819]: I0319 09:33:36.316021 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-daemon-config\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.316172 master-0 kubenswrapper[27819]: I0319 09:33:36.316108 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53bff8e4-bf60-4386-8905-49d43fd6c420-config\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:33:36.316253 master-0 kubenswrapper[27819]: I0319 09:33:36.316192 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cert\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:33:36.316290 master-0 kubenswrapper[27819]: I0319 09:33:36.316268 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-serving-cert\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:33:36.316290 master-0 kubenswrapper[27819]: I0319 09:33:36.316271 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-serving-cert\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:33:36.316366 master-0 kubenswrapper[27819]: I0319 09:33:36.316302 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a07456d-2e8e-4e80-a777-d0903ad21f07-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:33:36.316459 master-0 kubenswrapper[27819]: I0319 09:33:36.316435 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-etcd-ca\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:33:36.316567 master-0 kubenswrapper[27819]: I0319 09:33:36.316532 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/45523224-f530-4354-90de-7fd65a1a3911-metrics-tls\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:33:36.316737 master-0 kubenswrapper[27819]: I0319 09:33:36.316717 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a67ae8dc-240d-4708-9139-1d49c601e552-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"
Mar 19 09:33:36.316916 master-0 kubenswrapper[27819]: I0319 09:33:36.316884 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6cd2eac-6412-4f38-8272-743c67b218a3-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:33:36.317015 master-0 kubenswrapper[27819]: I0319 09:33:36.316974 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe1881fb-c670-442a-a092-c1eee6b7d5e5-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:33:36.317088 master-0 kubenswrapper[27819]: I0319 09:33:36.317070 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 19 09:33:36.320440 master-0 kubenswrapper[27819]: I0319 09:33:36.320411 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/60683578-6673-4aff-b1d5-3167d534ac08-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:33:36.322015 master-0 kubenswrapper[27819]: I0319 09:33:36.321987 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70e8c62b-97c3-4c0c-85d3-f660118831fd-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:33:36.336488 master-0 kubenswrapper[27819]: I0319 09:33:36.336439 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 19 09:33:36.352308 master-0 kubenswrapper[27819]: I0319 09:33:36.352253 27819 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 19 09:33:36.365861 master-0 kubenswrapper[27819]: I0319 09:33:36.365816 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 19 09:33:36.369296 master-0 kubenswrapper[27819]: I0319 09:33:36.368615 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 19 09:33:36.375691 master-0 kubenswrapper[27819]: I0319 09:33:36.375651 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 19 09:33:36.379850 master-0 kubenswrapper[27819]: I0319 09:33:36.379817 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386349 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386643 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-sys\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386695 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386722 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-fngzd\" (UID: \"0adaea87-67d0-41a7-a1f3-855fdd483aca\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386745 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-lib-modules\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386768 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386795 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386820 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386838 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386856 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386891 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqc86\" (UniqueName: \"kubernetes.io/projected/9d3fd276-2fe2-423a-b1ee-f27f1596d013-kube-api-access-cqc86\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386910 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386925 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName:
\"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386943 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jns5r\" (UniqueName: \"kubernetes.io/projected/3eeb72c3-1a56-4955-845e-81607513b1b2-kube-api-access-jns5r\") pod \"migrator-8487694857-nsnds\" (UID: \"3eeb72c3-1a56-4955-845e-81607513b1b2\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386969 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9tpx\" (UniqueName: \"kubernetes.io/projected/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-kube-api-access-s9tpx\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.386988 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4jnj\" (UniqueName: \"kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387011 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ded5da9a-1447-46df-a8ff-ffd469562599-kube-api-access\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 
09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387046 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387065 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-serving-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387085 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqts\" (UniqueName: \"kubernetes.io/projected/e0491730-604c-4a66-b827-458da88d262b-kube-api-access-gmqts\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387103 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387121 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a591384f-f83e-4f65-b5d0-d519f05edbd9-hosts-file\") 
pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387144 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-cabundle\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387162 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d58c6b38-ef11-465c-9fee-b83b84ce4669-cache\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387183 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flln7\" (UniqueName: \"kubernetes.io/projected/57227a66-c758-4a46-a5e1-f603baa3f570-kube-api-access-flln7\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387206 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387223 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387240 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w48g\" (UniqueName: \"kubernetes.io/projected/3f81774a-22a4-4335-961b-04e53e0f3b5e-kube-api-access-2w48g\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387256 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387279 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387303 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387325 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387381 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387399 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387420 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387443 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " 
pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387469 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387490 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-catalog-content\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387518 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-rootfs\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387535 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-conf\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387572 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387745 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387773 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-systemd\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387796 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387805 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d58c6b38-ef11-465c-9fee-b83b84ce4669-cache\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387870 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-multus\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387931 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-cni-bin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387969 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388097 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388143 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-kubelet\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388159 27819 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-os-release\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388196 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-etc-kubernetes\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388247 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-catalog-content\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388309 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-systemd-units\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388352 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-log-socket\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.387816 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-node-pullsecrets\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388458 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-kubernetes\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388484 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-root\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388498 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-cnibin\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388510 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388557 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svz6j\" (UniqueName: \"kubernetes.io/projected/1669b77c-4bef-42d5-ad0b-63c12a6677b2-kube-api-access-svz6j\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388580 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-wtmp\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388725 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-image-import-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388759 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388787 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit-dir\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.398649 
master-0 kubenswrapper[27819]: I0319 09:33:36.388849 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388874 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-slash\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388938 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-utilities\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.388984 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-textfile\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389011 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 
09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389054 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-utilities\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389097 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-textfile\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389109 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389145 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389237 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca\") pod 
\"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389268 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389289 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389321 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgrjz\" (UniqueName: \"kubernetes.io/projected/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-kube-api-access-lgrjz\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389353 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389377 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389421 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8p7b\" (UniqueName: \"kubernetes.io/projected/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-kube-api-access-g8p7b\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389448 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389472 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-utilities\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389497 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-client\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 
09:33:36.389518 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-sys\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389558 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-tmp\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389564 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-host-etc-kube\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389585 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389613 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " 
pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389641 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-utilities\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389653 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-ovn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389684 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389712 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389764 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: 
\"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389798 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389827 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-api-access-nfmmt\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389841 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-utilities\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389864 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389980 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-tmp\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.389962 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-tuned\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390024 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c3610f08-aba1-411d-aa6d-811b88acdb7b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390048 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390070 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxn9l\" (UniqueName: \"kubernetes.io/projected/72756f50-c970-4ef6-b8ca-88e49f996a74-kube-api-access-zxn9l\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:36.398649 
master-0 kubenswrapper[27819]: I0319 09:33:36.390098 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-node-log\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390175 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390181 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8a7e077-3f6c-4efb-9865-cf82480c5da1-utilities\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390203 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56365780-b87d-43fc-95f5-8a44166aecf8-config-volume\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390229 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 
09:33:36.390293 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390334 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-systemd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390261 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-tuned\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390425 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390452 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390490 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390517 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.390591 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-system-cni-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.392132 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.392628 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.394816 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.394930 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.394963 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6t27\" (UniqueName: \"kubernetes.io/projected/561b7381-8439-4ccc-ac50-d7a50aeb0c55-kube-api-access-t6t27\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.394987 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395028 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbmx9\" (UniqueName: 
\"kubernetes.io/projected/a591384f-f83e-4f65-b5d0-d519f05edbd9-kube-api-access-vbmx9\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395055 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp5rd\" (UniqueName: \"kubernetes.io/projected/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-kube-api-access-rp5rd\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395086 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e5534b-f428-45cf-b54e-d06b25dc3e09-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395115 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395140 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 
09:33:36.395168 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rzx9\" (UniqueName: \"kubernetes.io/projected/56365780-b87d-43fc-95f5-8a44166aecf8-kube-api-access-5rzx9\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395193 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrmjf\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-kube-api-access-rrmjf\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395220 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5hmg\" (UniqueName: \"kubernetes.io/projected/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b-kube-api-access-k5hmg\") pod \"csi-snapshot-controller-64854d9cff-blgk8\" (UID: \"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395247 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssdjz\" (UniqueName: \"kubernetes.io/projected/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-kube-api-access-ssdjz\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395287 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395511 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395317 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mncvz\" (UniqueName: \"kubernetes.io/projected/e8a7e077-3f6c-4efb-9865-cf82480c5da1-kube-api-access-mncvz\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395681 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-dir\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.398649 master-0 kubenswrapper[27819]: I0319 09:33:36.395732 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-run-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396028 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-var-lib-kubelet\") pod \"multus-8pt59\" (UID: 
\"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396442 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67e5534b-f428-45cf-b54e-d06b25dc3e09-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396585 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtjps\" (UniqueName: \"kubernetes.io/projected/55440bf9-0881-4823-af64-5652c2ad89ff-kube-api-access-gtjps\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396622 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396648 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396672 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396712 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396743 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396772 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396798 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " 
pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.396848 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-apiservice-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397657 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397655 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2ng\" (UniqueName: \"kubernetes.io/projected/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-kube-api-access-7g2ng\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397702 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-modprobe-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397723 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397742 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2vbp\" (UniqueName: \"kubernetes.io/projected/cd1425b9-fcd1-4aba-899f-e110eebce626-kube-api-access-s2vbp\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397759 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397792 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55440bf9-0881-4823-af64-5652c2ad89ff-tmpfs\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397866 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/55440bf9-0881-4823-af64-5652c2ad89ff-tmpfs\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.397993 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-netns\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398070 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-k8s-cni-cncf-io\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398098 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398100 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-conf-dir\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398116 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398154 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398168 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398210 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398213 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-cnibin\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398237 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-catalog-content\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398278 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398365 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398409 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398428 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72756f50-c970-4ef6-b8ca-88e49f996a74-catalog-content\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398374 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-mcd-auth-proxy-config\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398457 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398475 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-hostroot\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398485 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-var-lib-kubelet\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.398522 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400040 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-config\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400252 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-cache\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400288 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3883b232-5772-460f-9e94-b4cbc7b7e638-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400321 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400364 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400392 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400418 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-catalog-content\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400446 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdgvx\" (UniqueName: \"kubernetes.io/projected/c3610f08-aba1-411d-aa6d-811b88acdb7b-kube-api-access-jdgvx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400471 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400493 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400518 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400563 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400590 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400616 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npg9k\" (UniqueName: \"kubernetes.io/projected/dde1a2d9-a43e-4b26-82d7-e0f83577468f-kube-api-access-npg9k\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400653 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400679 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-key\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400702 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400726 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400750 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-catalog-content\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400773 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-encryption-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400800 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxz2j\" (UniqueName: \"kubernetes.io/projected/1b230b9d-529c-4b28-bc73-659a28d7961a-kube-api-access-mxz2j\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400844 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400871 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400894 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400925 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.400964 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtj5f\" (UniqueName: \"kubernetes.io/projected/4a73a5b0-478f-496d-8b0c-9e3daf39c082-kube-api-access-qtj5f\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401012 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfdkb\" (UniqueName: \"kubernetes.io/projected/14438c84-72d3-4f45-88a4-fc7e80df5fb8-kube-api-access-dfdkb\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401036 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401058 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401084 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401113 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401139 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-webhook-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401160 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401197 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401220 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401243 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401268 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-trusted-ca-bundle\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401289 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401334 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401361 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s45nc\" (UniqueName: \"kubernetes.io/projected/67e5534b-f428-45cf-b54e-d06b25dc3e09-kube-api-access-s45nc\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401454 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-catalog-content\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401464 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-cache\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401592 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3883b232-5772-460f-9e94-b4cbc7b7e638-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401663 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-netd\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401821 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.401993 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-ovn-kubernetes\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402111 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-utilities\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402138 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-host-run-multus-certs\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402158 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwbr\" (UniqueName: \"kubernetes.io/projected/d504cbc7-5c09-4712-9f7a-c41a6386ef79-kube-api-access-tmwbr\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402268 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-etc-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402335 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/60683578-6673-4aff-b1d5-3167d534ac08-system-cni-dir\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402445 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-utilities\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402507 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402588 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-host\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402624 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402638 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d504cbc7-5c09-4712-9f7a-c41a6386ef79-catalog-content\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402651 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6m8\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-kube-api-access-bs6m8\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402678 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9vh\" (UniqueName: \"kubernetes.io/projected/cef53432-93f5-4581-b3de-c8cc5cac2ecb-kube-api-access-sm9vh\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402700 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysconfig\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402723 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402744 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402786 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-var-lib-openvswitch\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402817 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402839 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402859 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9t7v\" (UniqueName: \"kubernetes.io/projected/fed75514-8f48-40b7-9fed-0afd6042cfbf-kube-api-access-h9t7v\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402877 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402903 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402927 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402966 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.402992 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403027 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403047 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nql4h\" (UniqueName: \"kubernetes.io/projected/6ed4ce2b-080f-4523-8527-eee768e06123-kube-api-access-nql4h\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403160 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-run-netns\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403186 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-os-release\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403201 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-host-cni-bin\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403282 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-run\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403300 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09cc190d-5647-40a1-bfe9-5355bcb33b10-multus-socket-dir-parent\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:36.414885 master-0 kubenswrapper[27819]: I0319 09:33:36.403315 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/672ad0aa-a0c5-4640-840d-3ffa02c55d62-host-slash\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:33:36.421830 master-0 kubenswrapper[27819]: I0319 09:33:36.417852 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 19 09:33:36.423494 master-0 kubenswrapper[27819]: I0319 09:33:36.423425 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovnkube-script-lib\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:36.435621 master-0 kubenswrapper[27819]: I0319 09:33:36.435560 27819 kubelet.go:2542] "SyncLoop (probe)" probe="liveness"
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:36.437379 master-0 kubenswrapper[27819]: I0319 09:33:36.437041 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 09:33:36.456270 master-0 kubenswrapper[27819]: I0319 09:33:36.455349 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 09:33:36.456438 master-0 kubenswrapper[27819]: W0319 09:33:36.456410 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3939b09ae7c21557b3dd5ab01349318.slice/crio-f39d5a946115d9aa2743e9655b3338b055f600620261ca0fa9e3a2d4b1e5b19b WatchSource:0}: Error finding container f39d5a946115d9aa2743e9655b3338b055f600620261ca0fa9e3a2d4b1e5b19b: Status 404 returned error can't find the container with id f39d5a946115d9aa2743e9655b3338b055f600620261ca0fa9e3a2d4b1e5b19b Mar 19 09:33:36.461078 master-0 kubenswrapper[27819]: I0319 09:33:36.460827 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-ovn-node-metrics-cert\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:36.475490 master-0 kubenswrapper[27819]: I0319 09:33:36.475457 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 09:33:36.477597 master-0 kubenswrapper[27819]: I0319 09:33:36.477564 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/47da8964-3606-4181-87fb-8f04a3065295-webhook-cert\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " 
pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:33:36.496219 master-0 kubenswrapper[27819]: I0319 09:33:36.496176 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 09:33:36.497097 master-0 kubenswrapper[27819]: I0319 09:33:36.496520 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-ovnkube-identity-cm\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504214 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-var-lib-kubelet\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504376 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504444 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504593 
27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-host\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504630 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysconfig\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504677 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504710 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-run\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504737 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-sys\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504758 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-lib-modules\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504836 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a591384f-f83e-4f65-b5d0-d519f05edbd9-hosts-file\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.504990 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-rootfs\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505011 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505029 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-conf\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505070 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-systemd\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505095 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-node-pullsecrets\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505116 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-kubernetes\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505134 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-root\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505161 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-wtmp\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505184 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit-dir\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505276 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505300 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-sys\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505319 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505335 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.506357 master-0 
kubenswrapper[27819]: I0319 09:33:36.505374 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505406 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c3610f08-aba1-411d-aa6d-811b88acdb7b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505498 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505645 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-kubernetes\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505664 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-var-lib-kubelet\") pod 
\"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505724 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-node-pullsecrets\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505757 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-systemd\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505788 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-conf\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505817 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505839 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-root\") pod \"node-exporter-k6kn8\" (UID: 
\"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505862 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505875 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-wtmp\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505895 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysctl-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505897 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit-dir\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505914 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-dgqfl\" 
(UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505932 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-host\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505954 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-sys\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505965 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-sysconfig\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.505993 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d58c6b38-ef11-465c-9fee-b83b84ce4669-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506000 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-containers\") pod 
\"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506042 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-sys\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506062 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506089 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/c3610f08-aba1-411d-aa6d-811b88acdb7b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506147 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-run\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506187 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-lib-modules\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506221 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a591384f-f83e-4f65-b5d0-d519f05edbd9-hosts-file\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506235 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506243 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-rootfs\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506264 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-dir\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506267 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506298 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-dir\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506319 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ded5da9a-1447-46df-a8ff-ffd469562599-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:36.506357 master-0 kubenswrapper[27819]: I0319 09:33:36.506386 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-modprobe-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.508553 master-0 kubenswrapper[27819]: I0319 09:33:36.506506 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/dde1a2d9-a43e-4b26-82d7-e0f83577468f-etc-modprobe-d\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:36.516673 master-0 
kubenswrapper[27819]: I0319 09:33:36.516598 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 19 09:33:36.538860 master-0 kubenswrapper[27819]: I0319 09:33:36.535361 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 19 09:33:36.543098 master-0 kubenswrapper[27819]: I0319 09:33:36.542431 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47da8964-3606-4181-87fb-8f04a3065295-env-overrides\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:33:36.555441 master-0 kubenswrapper[27819]: I0319 09:33:36.555283 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 19 09:33:36.575697 master-0 kubenswrapper[27819]: I0319 09:33:36.575604 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 19 09:33:36.584479 master-0 kubenswrapper[27819]: I0319 09:33:36.584415 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"871862fc370783a007b93b0c8761c2c4198bb406a94fb799e983b9f5246e1c1c"}
Mar 19 09:33:36.585904 master-0 kubenswrapper[27819]: I0319 09:33:36.585869 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerStarted","Data":"f39d5a946115d9aa2743e9655b3338b055f600620261ca0fa9e3a2d4b1e5b19b"}
Mar 19 09:33:36.587930 master-0 kubenswrapper[27819]: I0319 09:33:36.587841 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-check-endpoints/0.log"
Mar 19 09:33:36.590432 master-0 kubenswrapper[27819]: I0319 09:33:36.590179 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d"}
Mar 19 09:33:36.590432 master-0 kubenswrapper[27819]: I0319 09:33:36.590300 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 09:33:36.595514 master-0 kubenswrapper[27819]: I0319 09:33:36.595490 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 19 09:33:36.596705 master-0 kubenswrapper[27819]: I0319 09:33:36.596682 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/672ad0aa-a0c5-4640-840d-3ffa02c55d62-iptables-alerter-script\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:33:36.606226 master-0 kubenswrapper[27819]: I0319 09:33:36.606100 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 09:33:36.616426 master-0 kubenswrapper[27819]: I0319 09:33:36.616390 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 19 09:33:36.636267 master-0 kubenswrapper[27819]: I0319 09:33:36.636039 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 19 09:33:36.647870 master-0 kubenswrapper[27819]: I0319 09:33:36.647802 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 09:33:36.656735 master-0 kubenswrapper[27819]: I0319 09:33:36.656705 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 19 09:33:36.684168 master-0 kubenswrapper[27819]: I0319 09:33:36.684117 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 19 09:33:36.697452 master-0 kubenswrapper[27819]: I0319 09:33:36.697410 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 19 09:33:36.707654 master-0 kubenswrapper[27819]: I0319 09:33:36.707611 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: \"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv"
Mar 19 09:33:36.709391 master-0 kubenswrapper[27819]: I0319 09:33:36.709358 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") pod \"98826625-8de0-4bf7-8926-ec62517369e5\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") "
Mar 19 09:33:36.709465 master-0 kubenswrapper[27819]: I0319 09:33:36.709438 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") pod \"98826625-8de0-4bf7-8926-ec62517369e5\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") "
Mar 19 09:33:36.709595 master-0 kubenswrapper[27819]: I0319 09:33:36.709564 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98826625-8de0-4bf7-8926-ec62517369e5" (UID: "98826625-8de0-4bf7-8926-ec62517369e5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:33:36.709688 master-0 kubenswrapper[27819]: I0319 09:33:36.709658 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock" (OuterVolumeSpecName: "var-lock") pod "98826625-8de0-4bf7-8926-ec62517369e5" (UID: "98826625-8de0-4bf7-8926-ec62517369e5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:33:36.710890 master-0 kubenswrapper[27819]: I0319 09:33:36.710864 27819 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:33:36.710890 master-0 kubenswrapper[27819]: I0319 09:33:36.710886 27819 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98826625-8de0-4bf7-8926-ec62517369e5-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 19 09:33:36.716176 master-0 kubenswrapper[27819]: I0319 09:33:36.716135 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 19 09:33:36.755086 master-0 kubenswrapper[27819]: I0319 09:33:36.752609 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 19 09:33:36.758211 master-0 kubenswrapper[27819]: I0319 09:33:36.757529 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 19 09:33:36.776993 master-0 kubenswrapper[27819]: I0319 09:33:36.776859 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 19 09:33:36.799013 master-0 kubenswrapper[27819]: I0319 09:33:36.796228 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 19 09:33:36.815360 master-0 kubenswrapper[27819]: I0319 09:33:36.815313 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 19 09:33:36.819237 master-0 kubenswrapper[27819]: I0319 09:33:36.819201 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d58c6b38-ef11-465c-9fee-b83b84ce4669-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:33:36.835835 master-0 kubenswrapper[27819]: I0319 09:33:36.835777 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 19 09:33:36.854630 master-0 kubenswrapper[27819]: I0319 09:33:36.854593 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb"
Mar 19 09:33:36.862570 master-0 kubenswrapper[27819]: I0319 09:33:36.856134 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 19 09:33:36.883030 master-0 kubenswrapper[27819]: I0319 09:33:36.882974 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 19 09:33:36.885040 master-0 kubenswrapper[27819]: I0319 09:33:36.885006 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-key\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc"
Mar 19 09:33:36.902568 master-0 kubenswrapper[27819]: I0319 09:33:36.899048 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 19 09:33:36.910778 master-0 kubenswrapper[27819]: I0319 09:33:36.908270 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fed75514-8f48-40b7-9fed-0afd6042cfbf-signing-cabundle\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc"
Mar 19 09:33:36.923995 master-0 kubenswrapper[27819]: I0319 09:33:36.923951 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 19 09:33:36.940000 master-0 kubenswrapper[27819]: I0319 09:33:36.939615 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 19 09:33:36.941266 master-0 kubenswrapper[27819]: I0319 09:33:36.941241 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-client\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:36.958924 master-0 kubenswrapper[27819]: I0319 09:33:36.958884 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 19 09:33:36.965772 master-0 kubenswrapper[27819]: I0319 09:33:36.965722 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-serving-cert\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:36.976510 master-0 kubenswrapper[27819]: I0319 09:33:36.976406 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 19 09:33:36.982819 master-0 kubenswrapper[27819]: I0319 09:33:36.982771 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1669b77c-4bef-42d5-ad0b-63c12a6677b2-encryption-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:36.996574 master-0 kubenswrapper[27819]: I0319 09:33:36.996496 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 19 09:33:36.998266 master-0 kubenswrapper[27819]: I0319 09:33:36.998205 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-config\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:37.020084 master-0 kubenswrapper[27819]: I0319 09:33:37.020031 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 19 09:33:37.023427 master-0 kubenswrapper[27819]: I0319 09:33:37.023389 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-audit\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:37.035881 master-0 kubenswrapper[27819]: I0319 09:33:37.035743 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 19 09:33:37.039898 master-0 kubenswrapper[27819]: I0319 09:33:37.039526 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-etcd-serving-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:37.055560 master-0 kubenswrapper[27819]: I0319 09:33:37.055510 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 19 09:33:37.062018 master-0 kubenswrapper[27819]: I0319 09:33:37.061983 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-image-import-ca\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:37.081561 master-0 kubenswrapper[27819]: I0319 09:33:37.081150 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 19 09:33:37.085556 master-0 kubenswrapper[27819]: I0319 09:33:37.083402 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1669b77c-4bef-42d5-ad0b-63c12a6677b2-trusted-ca-bundle\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx"
Mar 19 09:33:37.096627 master-0 kubenswrapper[27819]: I0319 09:33:37.096356 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 19 09:33:37.106562 master-0 kubenswrapper[27819]: I0319 09:33:37.106027 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 09:33:37.112559 master-0 kubenswrapper[27819]: I0319 09:33:37.110508 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 19 09:33:37.116905 master-0 kubenswrapper[27819]: I0319 09:33:37.116013 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 19 09:33:37.120560 master-0 kubenswrapper[27819]: I0319 09:33:37.118473 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-apiservice-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:33:37.124167 master-0 kubenswrapper[27819]: I0319 09:33:37.123895 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55440bf9-0881-4823-af64-5652c2ad89ff-webhook-cert\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k"
Mar 19 09:33:37.154564 master-0 kubenswrapper[27819]: I0319 09:33:37.153624 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6t9w\" (UniqueName: \"kubernetes.io/projected/6cc45721-c05b-4161-91d9-d65cf6ec61d4-kube-api-access-k6t9w\") pod \"network-check-target-lql9l\" (UID: \"6cc45721-c05b-4161-91d9-d65cf6ec61d4\") " pod="openshift-network-diagnostics/network-check-target-lql9l"
Mar 19 09:33:37.168616 master-0 kubenswrapper[27819]: I0319 09:33:37.168224 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbktm\" (UniqueName: \"kubernetes.io/projected/43fca1a4-4fa7-4a43-b9c4-7f50a8737643-kube-api-access-mbktm\") pod \"csi-snapshot-controller-operator-5f5d689c6b-d89zz\" (UID: \"43fca1a4-4fa7-4a43-b9c4-7f50a8737643\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-d89zz"
Mar 19 09:33:37.187373 master-0 kubenswrapper[27819]: I0319 09:33:37.187323 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8bm4\" (UniqueName: \"kubernetes.io/projected/fe1881fb-c670-442a-a092-c1eee6b7d5e5-kube-api-access-r8bm4\") pod \"authentication-operator-5885bfd7f4-z9khh\" (UID: \"fe1881fb-c670-442a-a092-c1eee6b7d5e5\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-z9khh"
Mar 19 09:33:37.217370 master-0 kubenswrapper[27819]: I0319 09:33:37.217338 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmdk\" (UniqueName: \"kubernetes.io/projected/60683578-6673-4aff-b1d5-3167d534ac08-kube-api-access-zcmdk\") pod \"multus-additional-cni-plugins-jzj4h\" (UID: \"60683578-6673-4aff-b1d5-3167d534ac08\") " pod="openshift-multus/multus-additional-cni-plugins-jzj4h"
Mar 19 09:33:37.227036 master-0 kubenswrapper[27819]: I0319 09:33:37.226479 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt99t\" (UniqueName: \"kubernetes.io/projected/bff5aeea-f859-4e38-bf1c-9e730025c212-kube-api-access-dt99t\") pod \"network-metrics-daemon-lflg7\" (UID: \"bff5aeea-f859-4e38-bf1c-9e730025c212\") " pod="openshift-multus/network-metrics-daemon-lflg7"
Mar 19 09:33:37.235469 master-0 kubenswrapper[27819]: I0319 09:33:37.235417 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 19 09:33:37.239104 master-0 kubenswrapper[27819]: I0319 09:33:37.239067 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:37.269300 master-0 kubenswrapper[27819]: I0319 09:33:37.268990 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47plx\" (UniqueName: \"kubernetes.io/projected/211d123b-829c-49dd-b119-e172cab607cf-kube-api-access-47plx\") pod \"catalog-operator-68f85b4d6c-tlmxr\" (UID: \"211d123b-829c-49dd-b119-e172cab607cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr"
Mar 19 09:33:37.287378 master-0 kubenswrapper[27819]: I0319 09:33:37.287251 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hccqk\" (UniqueName: \"kubernetes.io/projected/d66c30b6-67ad-4864-8b51-0424d462ac98-kube-api-access-hccqk\") pod \"openshift-config-operator-95bf4f4d-2k7c5\" (UID: \"d66c30b6-67ad-4864-8b51-0424d462ac98\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5"
Mar 19 09:33:37.288642 master-0 kubenswrapper[27819]: I0319 09:33:37.288608 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 19 09:33:37.293958 master-0 kubenswrapper[27819]: I0319 09:33:37.293925 27819 request.go:700] Waited for 1.006148681s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemeter-trusted-ca-bundle-8i12ta5c71j38&limit=500&resourceVersion=0
Mar 19 09:33:37.305937 master-0 kubenswrapper[27819]: I0319 09:33:37.305878 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 19 09:33:37.312588 master-0 kubenswrapper[27819]: I0319 09:33:37.311448 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-trusted-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:37.329929 master-0 kubenswrapper[27819]: I0319 09:33:37.329882 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2svkc\" (UniqueName: \"kubernetes.io/projected/e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc-kube-api-access-2svkc\") pod \"package-server-manager-7b95f86987-52j2b\" (UID: \"e21cdaa5-7963-46ba-9f58-2fcdbcf98cdc\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b"
Mar 19 09:33:37.335493 master-0 kubenswrapper[27819]: I0319 09:33:37.335448 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 19 09:33:37.341362 master-0 kubenswrapper[27819]: I0319 09:33:37.341325 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56365780-b87d-43fc-95f5-8a44166aecf8-config-volume\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p"
Mar 19 09:33:37.355351 master-0 kubenswrapper[27819]: I0319 09:33:37.355298 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dq4bt"
Mar 19 09:33:37.380606 master-0 kubenswrapper[27819]: I0319 09:33:37.380248 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 19 09:33:37.388303 master-0 kubenswrapper[27819]: E0319 09:33:37.388180 27819 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388303 master-0 kubenswrapper[27819]: E0319 09:33:37.388208 27819 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388303 master-0 kubenswrapper[27819]: E0319 09:33:37.388224 27819 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388303 master-0 kubenswrapper[27819]: E0319 09:33:37.388216 27819 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388303 master-0 kubenswrapper[27819]: E0319 09:33:37.388284 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config podName:3883b232-5772-460f-9e94-b4cbc7b7e638 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888263569 +0000 UTC m=+2.809841261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-d46h5" (UID: "3883b232-5772-460f-9e94-b4cbc7b7e638") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388303 master-0 kubenswrapper[27819]: E0319 09:33:37.388304 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images podName:cd1425b9-fcd1-4aba-899f-e110eebce626 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.88829724 +0000 UTC m=+2.809874932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images") pod "machine-api-operator-6fbb6cf6f9-9jbdl" (UID: "cd1425b9-fcd1-4aba-899f-e110eebce626") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388184 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388348 27819 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388366 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert podName:9d3fd276-2fe2-423a-b1ee-f27f1596d013 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888341991 +0000 UTC m=+2.809919743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert") pod "ingress-canary-6r9c4" (UID: "9d3fd276-2fe2-423a-b1ee-f27f1596d013") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388375 27819 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388390 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle podName:57227a66-c758-4a46-a5e1-f603baa3f570 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888380642 +0000 UTC m=+2.809958434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle") pod "router-default-7dcf5569b5-k99cg" (UID: "57227a66-c758-4a46-a5e1-f603baa3f570") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388185 27819 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388408 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca podName:3f81774a-22a4-4335-961b-04e53e0f3b5e nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888400982 +0000 UTC m=+2.809978764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-tqnnm" (UID: "3f81774a-22a4-4335-961b-04e53e0f3b5e") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388424 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates podName:0adaea87-67d0-41a7-a1f3-855fdd483aca nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888417823 +0000 UTC m=+2.809995625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates") pod "prometheus-operator-admission-webhook-69c6b55594-fngzd" (UID: "0adaea87-67d0-41a7-a1f3-855fdd483aca") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388436 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs podName:e0491730-604c-4a66-b827-458da88d262b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888431383 +0000 UTC m=+2.810009075 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs") pod "machine-config-server-nsnqt" (UID: "e0491730-604c-4a66-b827-458da88d262b") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388450 27819 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388450 27819 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388484 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert podName:561b7381-8439-4ccc-ac50-d7a50aeb0c55 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888465504 +0000 UTC m=+2.810043216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert") pod "apiserver-775788bf78-tgdnw" (UID: "561b7381-8439-4ccc-ac50-d7a50aeb0c55") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388511 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls podName:5ae3c935-4beb-4cc9-ba91-d82cac3148dd nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888499025 +0000 UTC m=+2.810076787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls") pod "metrics-server-7c64897fc5-qj6vj" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: E0319 09:33:37.388523 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config podName:c3610f08-aba1-411d-aa6d-811b88acdb7b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888518706 +0000 UTC m=+2.810096398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-sbgz2" (UID: "c3610f08-aba1-411d-aa6d-811b88acdb7b") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.388694 master-0 kubenswrapper[27819]: I0319 09:33:37.388663 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.388742 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.388797 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca podName:31e46a34-8a00-4bb3-869b-8a5911ef6cf8 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888777803 +0000 UTC m=+2.810355535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca") pod "node-exporter-k6kn8" (UID: "31e46a34-8a00-4bb3-869b-8a5911ef6cf8") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.388892 27819 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.388930 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token podName:e0491730-604c-4a66-b827-458da88d262b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888919416 +0000 UTC m=+2.810497198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token") pod "machine-config-server-nsnqt" (UID: "e0491730-604c-4a66-b827-458da88d262b") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.388935 27819 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.388977 27819 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.389004 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls podName:67e5534b-f428-45cf-b54e-d06b25dc3e09 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.888990018 +0000 UTC m=+2.810567760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls") pod "machine-config-controller-b4f87c5b9-k7nfp" (UID: "67e5534b-f428-45cf-b54e-d06b25dc3e09") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389225 master-0 kubenswrapper[27819]: E0319 09:33:37.389028 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies podName:561b7381-8439-4ccc-ac50-d7a50aeb0c55 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.889018199 +0000 UTC m=+2.810595981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies") pod "apiserver-775788bf78-tgdnw" (UID: "561b7381-8439-4ccc-ac50-d7a50aeb0c55") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.389496 master-0 kubenswrapper[27819]: E0319 09:33:37.389241 27819 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389496 master-0 kubenswrapper[27819]: E0319 09:33:37.389273 27819 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-4tu9qkfhfujlu: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389496 master-0 kubenswrapper[27819]: E0319 09:33:37.389289 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls podName:cef53432-93f5-4581-b3de-c8cc5cac2ecb nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.889279396 +0000 UTC m=+2.810857078 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-chzwl" (UID: "cef53432-93f5-4581-b3de-c8cc5cac2ecb") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389496 master-0 kubenswrapper[27819]: E0319 09:33:37.389318 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle podName:5ae3c935-4beb-4cc9-ba91-d82cac3148dd nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.889304937 +0000 UTC m=+2.810882709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle") pod "metrics-server-7c64897fc5-qj6vj" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.389496 master-0 kubenswrapper[27819]: E0319 09:33:37.389462 27819 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.389731 master-0 kubenswrapper[27819]: E0319 09:33:37.389513 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca podName:14438c84-72d3-4f45-88a4-fc7e80df5fb8 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.889497972 +0000 UTC m=+2.811075654 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-97lvq" (UID: "14438c84-72d3-4f45-88a4-fc7e80df5fb8") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.389731 master-0 kubenswrapper[27819]: E0319 09:33:37.389587 27819 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.389731 master-0 kubenswrapper[27819]: E0319 09:33:37.389622 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config podName:561b7381-8439-4ccc-ac50-d7a50aeb0c55 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.889614035 +0000 UTC m=+2.811191727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config") pod "apiserver-775788bf78-tgdnw" (UID: "561b7381-8439-4ccc-ac50-d7a50aeb0c55") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.389731 master-0 kubenswrapper[27819]: E0319 09:33:37.389631 27819 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.389731 master-0 kubenswrapper[27819]: E0319 09:33:37.389674 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls podName:2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.889663507 +0000 UTC m=+2.811241269 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls") pod "machine-config-daemon-rw7tg" (UID: "2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.389731 master-0 kubenswrapper[27819]: E0319 09:33:37.389723 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.389925 master-0 kubenswrapper[27819]: E0319 09:33:37.389767 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca podName:3883b232-5772-460f-9e94-b4cbc7b7e638 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.889758299 +0000 UTC m=+2.811336101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca") pod "kube-state-metrics-7bbc969446-d46h5" (UID: "3883b232-5772-460f-9e94-b4cbc7b7e638") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391005 27819 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391015 27819 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391047 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config podName:7825a2ac-eab6-4988-861a-9e3bfdf5dcc8 nodeName:}" 
failed. No retries permitted until 2026-03-19 09:33:37.891036515 +0000 UTC m=+2.812614267 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-p4hvm" (UID: "7825a2ac-eab6-4988-861a-9e3bfdf5dcc8") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391062 27819 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391080 27819 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391068 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca podName:561b7381-8439-4ccc-ac50-d7a50aeb0c55 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.891059026 +0000 UTC m=+2.812636838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca") pod "apiserver-775788bf78-tgdnw" (UID: "561b7381-8439-4ccc-ac50-d7a50aeb0c55") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391109 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert podName:ded5da9a-1447-46df-a8ff-ffd469562599 nodeName:}" failed. 
No retries permitted until 2026-03-19 09:33:37.891098167 +0000 UTC m=+2.812675949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert") pod "cluster-version-operator-7d58488df-dgqfl" (UID: "ded5da9a-1447-46df-a8ff-ffd469562599") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391111 27819 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391130 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle podName:561b7381-8439-4ccc-ac50-d7a50aeb0c55 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.891120578 +0000 UTC m=+2.812698340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle") pod "apiserver-775788bf78-tgdnw" (UID: "561b7381-8439-4ccc-ac50-d7a50aeb0c55") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391154 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls podName:cd1425b9-fcd1-4aba-899f-e110eebce626 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.891146788 +0000 UTC m=+2.812724580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-9jbdl" (UID: "cd1425b9-fcd1-4aba-899f-e110eebce626") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391157 27819 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391169 27819 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391186 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth podName:57227a66-c758-4a46-a5e1-f603baa3f570 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.891179339 +0000 UTC m=+2.812757121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth") pod "router-default-7dcf5569b5-k99cg" (UID: "57227a66-c758-4a46-a5e1-f603baa3f570") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.391575 master-0 kubenswrapper[27819]: E0319 09:33:37.391205 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls podName:14ee9a22-5b04-402c-98e9-35e2eb7cb2a2 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.89119648 +0000 UTC m=+2.812774252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls") pod "machine-approver-5c6485487f-ttn8h" (UID: "14ee9a22-5b04-402c-98e9-35e2eb7cb2a2") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.396032 master-0 kubenswrapper[27819]: I0319 09:33:37.395969 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 19 09:33:37.396123 master-0 kubenswrapper[27819]: E0319 09:33:37.396116 27819 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.396163 master-0 kubenswrapper[27819]: E0319 09:33:37.396152 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs podName:57227a66-c758-4a46-a5e1-f603baa3f570 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.896141227 +0000 UTC m=+2.817718919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs") pod "router-default-7dcf5569b5-k99cg" (UID: "57227a66-c758-4a46-a5e1-f603baa3f570") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398188 master-0 kubenswrapper[27819]: E0319 09:33:37.398146 27819 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398188 master-0 kubenswrapper[27819]: E0319 09:33:37.398170 27819 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398304 master-0 kubenswrapper[27819]: E0319 09:33:37.398208 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls podName:3f81774a-22a4-4335-961b-04e53e0f3b5e nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898198903 +0000 UTC m=+2.819776595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-tqnnm" (UID: "3f81774a-22a4-4335-961b-04e53e0f3b5e") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398304 master-0 kubenswrapper[27819]: E0319 09:33:37.398226 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls podName:c3610f08-aba1-411d-aa6d-811b88acdb7b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898216713 +0000 UTC m=+2.819794405 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-sbgz2" (UID: "c3610f08-aba1-411d-aa6d-811b88acdb7b") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398304 master-0 kubenswrapper[27819]: E0319 09:33:37.398229 27819 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398304 master-0 kubenswrapper[27819]: E0319 09:33:37.398228 27819 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.398304 master-0 kubenswrapper[27819]: E0319 09:33:37.398275 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client podName:561b7381-8439-4ccc-ac50-d7a50aeb0c55 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898259314 +0000 UTC m=+2.819837076 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client") pod "apiserver-775788bf78-tgdnw" (UID: "561b7381-8439-4ccc-ac50-d7a50aeb0c55") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398304 master-0 kubenswrapper[27819]: E0319 09:33:37.398307 27819 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398616 master-0 kubenswrapper[27819]: E0319 09:33:37.398330 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config podName:cd1425b9-fcd1-4aba-899f-e110eebce626 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898309846 +0000 UTC m=+2.819887598 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config") pod "machine-api-operator-6fbb6cf6f9-9jbdl" (UID: "cd1425b9-fcd1-4aba-899f-e110eebce626") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.398616 master-0 kubenswrapper[27819]: E0319 09:33:37.398354 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls podName:3883b232-5772-460f-9e94-b4cbc7b7e638 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898343847 +0000 UTC m=+2.819921639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-d46h5" (UID: "3883b232-5772-460f-9e94-b4cbc7b7e638") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398616 master-0 kubenswrapper[27819]: E0319 09:33:37.398383 27819 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398616 master-0 kubenswrapper[27819]: E0319 09:33:37.398420 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs podName:4a73a5b0-478f-496d-8b0c-9e3daf39c082 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898410709 +0000 UTC m=+2.819988491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs") pod "multus-admission-controller-58c9f8fc64-69cqn" (UID: "4a73a5b0-478f-496d-8b0c-9e3daf39c082") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.398616 master-0 kubenswrapper[27819]: E0319 09:33:37.398592 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.398776 master-0 kubenswrapper[27819]: E0319 09:33:37.398641 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca podName:1b230b9d-529c-4b28-bc73-659a28d7961a nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898629366 +0000 UTC m=+2.820207108 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-84ztr" (UID: "1b230b9d-529c-4b28-bc73-659a28d7961a") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.398776 master-0 kubenswrapper[27819]: E0319 09:33:37.398649 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.398776 master-0 kubenswrapper[27819]: E0319 09:33:37.398727 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle podName:5ae3c935-4beb-4cc9-ba91-d82cac3148dd nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.898713498 +0000 UTC m=+2.820291270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7c64897fc5-qj6vj" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.401897 master-0 kubenswrapper[27819]: E0319 09:33:37.401414 27819 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.401897 master-0 kubenswrapper[27819]: E0319 09:33:37.401462 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate podName:57227a66-c758-4a46-a5e1-f603baa3f570 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.901452474 +0000 UTC m=+2.823030166 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate") pod "router-default-7dcf5569b5-k99cg" (UID: "57227a66-c758-4a46-a5e1-f603baa3f570") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.402659 master-0 kubenswrapper[27819]: E0319 09:33:37.402615 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.402659 master-0 kubenswrapper[27819]: E0319 09:33:37.402641 27819 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402650 27819 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402621 27819 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402689 27819 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402621 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402672 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle 
podName:d80f71af-e3ff-4a9f-8c9c-883a6a5581d0 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.902660257 +0000 UTC m=+2.824238049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle") pod "telemeter-client-8699f95c5b-7w9vq" (UID: "d80f71af-e3ff-4a9f-8c9c-883a6a5581d0") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402738 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca podName:d80f71af-e3ff-4a9f-8c9c-883a6a5581d0 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.902729818 +0000 UTC m=+2.824307590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca") pod "telemeter-client-8699f95c5b-7w9vq" (UID: "d80f71af-e3ff-4a9f-8c9c-883a6a5581d0") : failed to sync configmap cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402754 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls podName:1b230b9d-529c-4b28-bc73-659a28d7961a nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.902747109 +0000 UTC m=+2.824324921 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-84ztr" (UID: "1b230b9d-529c-4b28-bc73-659a28d7961a") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402768 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config podName:3f81774a-22a4-4335-961b-04e53e0f3b5e nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.902761389 +0000 UTC m=+2.824339201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-tqnnm" (UID: "3f81774a-22a4-4335-961b-04e53e0f3b5e") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.402779 master-0 kubenswrapper[27819]: E0319 09:33:37.402787 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls podName:56365780-b87d-43fc-95f5-8a44166aecf8 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.90277902 +0000 UTC m=+2.824356712 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls") pod "dns-default-9xr8p" (UID: "56365780-b87d-43fc-95f5-8a44166aecf8") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.402802 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls podName:6ed4ce2b-080f-4523-8527-eee768e06123 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.90279503 +0000 UTC m=+2.824372832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-vxndj" (UID: "6ed4ce2b-080f-4523-8527-eee768e06123") : failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.402826 27819 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.402853 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client podName:d80f71af-e3ff-4a9f-8c9c-883a6a5581d0 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.902847692 +0000 UTC m=+2.824425374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client") pod "telemeter-client-8699f95c5b-7w9vq" (UID: "d80f71af-e3ff-4a9f-8c9c-883a6a5581d0") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.402882 27819 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.402906 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images podName:c3610f08-aba1-411d-aa6d-811b88acdb7b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.902898873 +0000 UTC m=+2.824476565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images") pod "cluster-cloud-controller-manager-operator-7dff898856-sbgz2" (UID: "c3610f08-aba1-411d-aa6d-811b88acdb7b") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: I0319 09:33:37.403070 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.403116 27819 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.403141 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config podName:31e46a34-8a00-4bb3-869b-8a5911ef6cf8 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.9031352 +0000 UTC m=+2.824712892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config") pod "node-exporter-k6kn8" (UID: "31e46a34-8a00-4bb3-869b-8a5911ef6cf8") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.403154 27819 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403167 master-0 kubenswrapper[27819]: E0319 09:33:37.403174 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls podName:31e46a34-8a00-4bb3-869b-8a5911ef6cf8 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.90316868 +0000 UTC m=+2.824746372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls") pod "node-exporter-k6kn8" (UID: "31e46a34-8a00-4bb3-869b-8a5911ef6cf8") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403195 27819 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403218 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config podName:1b230b9d-529c-4b28-bc73-659a28d7961a nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.903213132 +0000 UTC m=+2.824790824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-84ztr" (UID: "1b230b9d-529c-4b28-bc73-659a28d7961a") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403241 27819 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403260 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca podName:ded5da9a-1447-46df-a8ff-ffd469562599 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.903254693 +0000 UTC m=+2.824832385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca") pod "cluster-version-operator-7d58488df-dgqfl" (UID: "ded5da9a-1447-46df-a8ff-ffd469562599") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403280 27819 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403299 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap podName:3883b232-5772-460f-9e94-b4cbc7b7e638 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.903294634 +0000 UTC m=+2.824872326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-d46h5" (UID: "3883b232-5772-460f-9e94-b4cbc7b7e638") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403311 27819 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.403574 master-0 kubenswrapper[27819]: E0319 09:33:37.403333 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert podName:7825a2ac-eab6-4988-861a-9e3bfdf5dcc8 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.903327955 +0000 UTC m=+2.824905647 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert") pod "cluster-autoscaler-operator-866dc4744-p4hvm" (UID: "7825a2ac-eab6-4988-861a-9e3bfdf5dcc8") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.403908 27819 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.403964 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config podName:14ee9a22-5b04-402c-98e9-35e2eb7cb2a2 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.903946082 +0000 UTC m=+2.825523774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config") pod "machine-approver-5c6485487f-ttn8h" (UID: "14ee9a22-5b04-402c-98e9-35e2eb7cb2a2") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.403984 27819 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.404007 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls podName:d80f71af-e3ff-4a9f-8c9c-883a6a5581d0 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.904001653 +0000 UTC m=+2.825579345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls") pod "telemeter-client-8699f95c5b-7w9vq" (UID: "d80f71af-e3ff-4a9f-8c9c-883a6a5581d0") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.404030 27819 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.404055 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config podName:14ee9a22-5b04-402c-98e9-35e2eb7cb2a2 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.904049285 +0000 UTC m=+2.825626977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config") pod "machine-approver-5c6485487f-ttn8h" (UID: "14ee9a22-5b04-402c-98e9-35e2eb7cb2a2") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.404067 27819 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.404090 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert podName:14438c84-72d3-4f45-88a4-fc7e80df5fb8 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.904085016 +0000 UTC m=+2.825662708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-97lvq" (UID: "14438c84-72d3-4f45-88a4-fc7e80df5fb8") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.404110 27819 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.405313 master-0 kubenswrapper[27819]: E0319 09:33:37.404130 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls podName:d80f71af-e3ff-4a9f-8c9c-883a6a5581d0 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:37.904124688 +0000 UTC m=+2.825702380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls") pod "telemeter-client-8699f95c5b-7w9vq" (UID: "d80f71af-e3ff-4a9f-8c9c-883a6a5581d0") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:37.416270 master-0 kubenswrapper[27819]: I0319 09:33:37.416227 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-2hrp4"
Mar 19 09:33:37.437397 master-0 kubenswrapper[27819]: I0319 09:33:37.437344 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 19 09:33:37.455395 master-0 kubenswrapper[27819]: I0319 09:33:37.455343 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 19 09:33:37.485956 master-0 kubenswrapper[27819]: I0319 09:33:37.485912 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnjq\" (UniqueName: \"kubernetes.io/projected/c222998f-6211-4466-8ad7-5d9fcfb10789-kube-api-access-cjnjq\") pod \"machine-config-operator-84d549f6d5-4wv72\" (UID: \"c222998f-6211-4466-8ad7-5d9fcfb10789\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-4wv72"
Mar 19 09:33:37.495213 master-0 kubenswrapper[27819]: I0319 09:33:37.495174 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 19 09:33:37.515306 master-0 kubenswrapper[27819]: I0319 09:33:37.515261 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 19 09:33:37.546862 master-0 kubenswrapper[27819]: I0319 09:33:37.546756 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:33:37.556065 master-0 kubenswrapper[27819]: I0319 09:33:37.556022 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4tu9qkfhfujlu"
Mar 19 09:33:37.575806 master-0 kubenswrapper[27819]: I0319 09:33:37.575760 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 19 09:33:37.596250 master-0 kubenswrapper[27819]: I0319 09:33:37.596207 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 19 09:33:37.597524 master-0 kubenswrapper[27819]: I0319 09:33:37.597478 27819 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="6950dbd162496ff96ac22cf66872a0f41be7ebc9910fbd50974ed34a97c4be41" exitCode=0
Mar 19 09:33:37.597885 master-0 kubenswrapper[27819]: I0319 09:33:37.597857 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b030b36c-1fe0-4044-a53e-9b69b33e6833"
Mar 19 09:33:37.597885 master-0 kubenswrapper[27819]: I0319 09:33:37.597882 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b030b36c-1fe0-4044-a53e-9b69b33e6833"
Mar 19 09:33:37.599926 master-0 kubenswrapper[27819]: I0319 09:33:37.599898 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 19 09:33:37.600733 master-0 kubenswrapper[27819]: I0319 09:33:37.600700 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 19 09:33:37.600803 master-0 kubenswrapper[27819]: I0319 09:33:37.600781 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="cb5d42ba-5b76-4178-bbee-be1c30533bbb"
Mar 19 09:33:37.600803 master-0 kubenswrapper[27819]: I0319 09:33:37.600799 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="cb5d42ba-5b76-4178-bbee-be1c30533bbb"
Mar 19 09:33:37.615799 master-0 kubenswrapper[27819]: I0319 09:33:37.615754 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-xd9dt"
Mar 19 09:33:37.652230 master-0 kubenswrapper[27819]: I0319 09:33:37.652165 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8b7s\" (UniqueName: \"kubernetes.io/projected/a57648b5-1a08-49a7-bedb-f7c1e54d92b4-kube-api-access-m8b7s\") pod \"cluster-node-tuning-operator-598fbc5f8f-8mpp9\" (UID: \"a57648b5-1a08-49a7-bedb-f7c1e54d92b4\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-8mpp9"
Mar 19 09:33:37.655885 master-0 kubenswrapper[27819]: I0319 09:33:37.655837 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 19 09:33:37.687569 master-0 kubenswrapper[27819]: I0319 09:33:37.687498 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjhk\" (UniqueName: \"kubernetes.io/projected/58fbf09a-3a26-45ab-8496-11d05c27e9cf-kube-api-access-4xjhk\") pod \"marketplace-operator-89ccd998f-stct6\" (UID: \"58fbf09a-3a26-45ab-8496-11d05c27e9cf\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6"
Mar 19 09:33:37.696115 master-0 kubenswrapper[27819]: I0319 09:33:37.696082 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 19 09:33:37.715710 master-0 kubenswrapper[27819]: I0319 09:33:37.715658 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-vxsrn"
Mar 19 09:33:37.757044 master-0 kubenswrapper[27819]: I0319 09:33:37.757004 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vkwb4"
Mar 19 09:33:37.776776 master-0 kubenswrapper[27819]: I0319 09:33:37.776718 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 19 09:33:37.795395 master-0 kubenswrapper[27819]: I0319 09:33:37.795356 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 19 09:33:37.815793 master-0 kubenswrapper[27819]: I0319 09:33:37.815672 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 19 09:33:37.836218 master-0 kubenswrapper[27819]: I0319 09:33:37.836169 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 19 09:33:37.855699 master-0 kubenswrapper[27819]: I0319 09:33:37.855642 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 19 09:33:37.876292 master-0 kubenswrapper[27819]: I0319 09:33:37.876239 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 19 09:33:37.901042 master-0 kubenswrapper[27819]: I0319 09:33:37.900988 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 19 09:33:37.915371 master-0 kubenswrapper[27819]: I0319 09:33:37.915317 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 19 09:33:37.935404 master-0 kubenswrapper[27819]: I0319 09:33:37.935343 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-nf86g"
Mar 19 09:33:37.935985 master-0 kubenswrapper[27819]: I0319 09:33:37.935946 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:33:37.936113 master-0 kubenswrapper[27819]: I0319 09:33:37.936084 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"
Mar 19 09:33:37.936148 master-0 kubenswrapper[27819]: I0319 09:33:37.936133 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:33:37.936199 master-0 kubenswrapper[27819]: I0319 09:33:37.936163 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:37.936199 master-0 kubenswrapper[27819]: I0319 09:33:37.936184 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:33:37.936254 master-0 kubenswrapper[27819]: I0319 09:33:37.936215 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:33:37.936282 master-0 kubenswrapper[27819]: I0319 09:33:37.936253 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:33:37.936312 master-0 kubenswrapper[27819]: I0319 09:33:37.936290 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:33:37.936344 master-0 kubenswrapper[27819]: I0319 09:33:37.936316 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:33:37.936344 master-0 kubenswrapper[27819]: I0319 09:33:37.936330 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3f81774a-22a4-4335-961b-04e53e0f3b5e-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"
Mar 19 09:33:37.936397 master-0 kubenswrapper[27819]: I0319 09:33:37.936337 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:33:37.936433 master-0 kubenswrapper[27819]: I0319 09:33:37.936401 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:37.936465 master-0 kubenswrapper[27819]: I0319 09:33:37.936433 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:33:37.936497 master-0 kubenswrapper[27819]: I0319 09:33:37.936462 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"
Mar 19 09:33:37.936526 master-0 kubenswrapper[27819]: I0319 09:33:37.936495 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"
Mar 19 09:33:37.936824 master-0 kubenswrapper[27819]: I0319 09:33:37.936743 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg"
Mar 19 09:33:37.936885 master-0 kubenswrapper[27819]: I0319 09:33:37.936850 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:33:37.936943 master-0 kubenswrapper[27819]: I0319 09:33:37.936928 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:37.936986 master-0 kubenswrapper[27819]: I0319 09:33:37.936754 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:37.937117 master-0 kubenswrapper[27819]: I0319 09:33:37.937079 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"
Mar 19 09:33:37.937193 master-0 kubenswrapper[27819]: I0319 09:33:37.937128 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:33:37.937243 master-0 kubenswrapper[27819]: I0319 09:33:37.937212 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:33:37.937288 master-0 kubenswrapper[27819]: I0319 09:33:37.937263 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:33:37.937319 master-0 kubenswrapper[27819]: I0319 09:33:37.937287 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl"
Mar 19 09:33:37.937349 master-0 kubenswrapper[27819]: I0319 09:33:37.937323 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:33:37.937504 master-0 kubenswrapper[27819]: I0319 09:33:37.937467 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:33:37.937558 master-0 kubenswrapper[27819]: I0319 09:33:37.937486 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:37.937593 master-0 kubenswrapper[27819]: I0319 09:33:37.937531 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:33:37.937636 master-0 kubenswrapper[27819]: I0319 09:33:37.937616 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:33:37.937748 master-0 kubenswrapper[27819]: I0319 09:33:37.937709 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-machine-approver-tls\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:33:37.937781 master-0 kubenswrapper[27819]: I0319 09:33:37.937717 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw"
Mar 19 09:33:37.937854 master-0 kubenswrapper[27819]: I0319 09:33:37.937826 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"
Mar 19 09:33:37.937901 master-0 kubenswrapper[27819]: I0319 09:33:37.937880 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn"
Mar 19 09:33:37.938001 master-0 kubenswrapper[27819]: I0319 09:33:37.937973 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:33:37.938042 master-0 kubenswrapper[27819]: I0319 09:33:37.938025 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:33:37.938124 master-0 kubenswrapper[27819]: I0319 09:33:37.938099 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:37.938213 master-0 kubenswrapper[27819]: I0319 09:33:37.938188 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"
Mar 19 09:33:37.938347 master-0 kubenswrapper[27819]: I0319 09:33:37.938309 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:37.938385 master-0 kubenswrapper[27819]: I0319 09:33:37.938365 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:37.938477 master-0 kubenswrapper[27819]: I0319 09:33:37.938452 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:37.938516 master-0 kubenswrapper[27819]: I0319 09:33:37.938497 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-metrics-client-ca\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:37.938516 master-0 kubenswrapper[27819]: I0319 09:33:37.938502 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq"
Mar 19 09:33:37.938607 master-0 kubenswrapper[27819]: I0319 09:33:37.938583 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj"
Mar 19 09:33:37.938653 master-0 kubenswrapper[27819]: I0319 09:33:37.938632 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq"
Mar 19 09:33:37.938692 master-0 kubenswrapper[27819]: I0319 09:33:37.938664 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h"
Mar 19 09:33:37.938692 master-0 kubenswrapper[27819]: I0319 09:33:37.938686 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:33:37.938764 master-0 kubenswrapper[27819]: I0319 09:33:37.938728 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName:
\"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-secret-telemeter-client\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:37.938793 master-0 kubenswrapper[27819]: I0319 09:33:37.938768 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:37.938824 master-0 kubenswrapper[27819]: I0319 09:33:37.938810 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:33:37.938855 master-0 kubenswrapper[27819]: I0319 09:33:37.938841 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:33:37.938891 master-0 kubenswrapper[27819]: I0319 09:33:37.938873 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-auth-proxy-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" Mar 19 09:33:37.938920 master-0 kubenswrapper[27819]: I0319 
09:33:37.938892 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:37.938920 master-0 kubenswrapper[27819]: I0319 09:33:37.938908 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/14438c84-72d3-4f45-88a4-fc7e80df5fb8-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:33:37.938977 master-0 kubenswrapper[27819]: I0319 09:33:37.938922 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-federate-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:37.938977 master-0 kubenswrapper[27819]: I0319 09:33:37.938929 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:33:37.939033 master-0 kubenswrapper[27819]: I0319 09:33:37.938996 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:33:37.939082 master-0 kubenswrapper[27819]: I0319 09:33:37.939052 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:37.939123 master-0 kubenswrapper[27819]: I0319 09:33:37.939066 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1b230b9d-529c-4b28-bc73-659a28d7961a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:33:37.939176 master-0 kubenswrapper[27819]: I0319 09:33:37.939154 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:33:37.939229 master-0 kubenswrapper[27819]: I0319 09:33:37.939191 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:33:37.939287 master-0 kubenswrapper[27819]: I0319 09:33:37.939269 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:37.939348 master-0 kubenswrapper[27819]: I0319 09:33:37.939318 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:37.939380 master-0 kubenswrapper[27819]: I0319 09:33:37.939350 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-serving-certs-ca-bundle\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:37.939409 master-0 kubenswrapper[27819]: I0319 09:33:37.939356 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl" Mar 19 09:33:37.939484 master-0 kubenswrapper[27819]: I0319 09:33:37.939443 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" Mar 19 09:33:37.939581 master-0 kubenswrapper[27819]: I0319 09:33:37.939556 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-telemeter-client-tls\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:37.939581 master-0 kubenswrapper[27819]: I0319 09:33:37.939567 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:33:37.939663 master-0 kubenswrapper[27819]: I0319 09:33:37.939566 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-config\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" Mar 19 09:33:37.939663 master-0 kubenswrapper[27819]: I0319 09:33:37.939630 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-fngzd\" (UID: \"0adaea87-67d0-41a7-a1f3-855fdd483aca\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:33:37.939746 master-0 kubenswrapper[27819]: I0319 09:33:37.939672 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:33:37.939746 master-0 kubenswrapper[27819]: I0319 09:33:37.939712 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:37.939932 master-0 kubenswrapper[27819]: I0319 09:33:37.939910 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:33:37.940105 master-0 kubenswrapper[27819]: I0319 09:33:37.940078 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-metrics-client-ca\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:37.956198 master-0 kubenswrapper[27819]: I0319 09:33:37.956140 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-t4gft" Mar 19 09:33:37.976251 
master-0 kubenswrapper[27819]: I0319 09:33:37.976188 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p2d2f" Mar 19 09:33:37.996234 master-0 kubenswrapper[27819]: I0319 09:33:37.996182 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 19 09:33:37.998656 master-0 kubenswrapper[27819]: I0319 09:33:37.998603 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4a73a5b0-478f-496d-8b0c-9e3daf39c082-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn" Mar 19 09:33:38.016577 master-0 kubenswrapper[27819]: I0319 09:33:38.016497 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-cpq7s" Mar 19 09:33:38.036735 master-0 kubenswrapper[27819]: I0319 09:33:38.036636 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 19 09:33:38.040471 master-0 kubenswrapper[27819]: I0319 09:33:38.040423 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-cert\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:33:38.068601 master-0 kubenswrapper[27819]: I0319 09:33:38.068452 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-sx4vp" Mar 19 09:33:38.076102 master-0 kubenswrapper[27819]: I0319 09:33:38.076050 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 19 09:33:38.078309 master-0 kubenswrapper[27819]: I0319 09:33:38.078272 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm" Mar 19 09:33:38.095975 master-0 kubenswrapper[27819]: I0319 09:33:38.095926 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 19 09:33:38.099567 master-0 kubenswrapper[27819]: I0319 09:33:38.099512 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-tls\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:38.115885 master-0 kubenswrapper[27819]: I0319 09:33:38.115843 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 09:33:38.118476 master-0 kubenswrapper[27819]: I0319 09:33:38.118439 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-stats-auth\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:38.136099 master-0 kubenswrapper[27819]: I0319 09:33:38.136053 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 19 09:33:38.140414 master-0 kubenswrapper[27819]: I0319 09:33:38.140380 27819 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:38.155773 master-0 kubenswrapper[27819]: I0319 09:33:38.155738 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-cn6b2" Mar 19 09:33:38.175898 master-0 kubenswrapper[27819]: I0319 09:33:38.175846 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 09:33:38.179560 master-0 kubenswrapper[27819]: I0319 09:33:38.179515 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-default-certificate\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:38.195561 master-0 kubenswrapper[27819]: I0319 09:33:38.195506 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 09:33:38.216081 master-0 kubenswrapper[27819]: I0319 09:33:38.216050 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 09:33:38.217717 master-0 kubenswrapper[27819]: I0319 09:33:38.217665 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57227a66-c758-4a46-a5e1-f603baa3f570-service-ca-bundle\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:38.236531 master-0 kubenswrapper[27819]: I0319 09:33:38.236493 27819 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 09:33:38.238946 master-0 kubenswrapper[27819]: I0319 09:33:38.238903 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/57227a66-c758-4a46-a5e1-f603baa3f570-metrics-certs\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:38.255708 master-0 kubenswrapper[27819]: I0319 09:33:38.255663 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 19 09:33:38.258031 master-0 kubenswrapper[27819]: I0319 09:33:38.257994 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d3fd276-2fe2-423a-b1ee-f27f1596d013-cert\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4" Mar 19 09:33:38.276351 master-0 kubenswrapper[27819]: I0319 09:33:38.276295 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 09:33:38.294738 master-0 kubenswrapper[27819]: I0319 09:33:38.294690 27819 request.go:700] Waited for 2.001166283s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-4d2wn&limit=500&resourceVersion=0 Mar 19 09:33:38.296204 master-0 kubenswrapper[27819]: I0319 09:33:38.296177 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4d2wn" Mar 19 09:33:38.315928 master-0 kubenswrapper[27819]: I0319 09:33:38.315873 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 19 09:33:38.336150 master-0 kubenswrapper[27819]: I0319 09:33:38.336022 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-nst6c" Mar 19 09:33:38.355942 master-0 kubenswrapper[27819]: I0319 09:33:38.355893 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 09:33:38.375454 master-0 kubenswrapper[27819]: I0319 09:33:38.375391 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 09:33:38.378482 master-0 kubenswrapper[27819]: I0319 09:33:38.378430 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-client\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:38.395629 master-0 kubenswrapper[27819]: I0319 09:33:38.395589 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 09:33:38.397817 master-0 kubenswrapper[27819]: I0319 09:33:38.397783 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-serving-cert\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:38.415518 master-0 kubenswrapper[27819]: I0319 09:33:38.415462 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 09:33:38.417858 master-0 kubenswrapper[27819]: I0319 09:33:38.417804 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/561b7381-8439-4ccc-ac50-d7a50aeb0c55-encryption-config\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:38.435739 master-0 kubenswrapper[27819]: I0319 09:33:38.435649 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 09:33:38.437040 master-0 kubenswrapper[27819]: I0319 09:33:38.436990 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-audit-policies\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:38.455435 master-0 kubenswrapper[27819]: I0319 09:33:38.455361 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 09:33:38.460923 master-0 kubenswrapper[27819]: I0319 09:33:38.460871 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:33:38.476067 master-0 kubenswrapper[27819]: I0319 09:33:38.476003 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 09:33:38.478288 master-0 kubenswrapper[27819]: I0319 09:33:38.478235 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-etcd-serving-ca\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:38.495392 master-0 kubenswrapper[27819]: I0319 09:33:38.495309 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-qxk5n" Mar 19 09:33:38.516344 master-0 kubenswrapper[27819]: I0319 09:33:38.516280 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 09:33:38.519878 master-0 kubenswrapper[27819]: I0319 09:33:38.519831 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:33:38.536460 master-0 kubenswrapper[27819]: I0319 09:33:38.536384 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 09:33:38.538721 master-0 kubenswrapper[27819]: I0319 09:33:38.538667 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/561b7381-8439-4ccc-ac50-d7a50aeb0c55-trusted-ca-bundle\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:38.556115 master-0 kubenswrapper[27819]: I0319 09:33:38.556043 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 19 09:33:38.559820 master-0 kubenswrapper[27819]: I0319 09:33:38.559766 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1b230b9d-529c-4b28-bc73-659a28d7961a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr"
Mar 19 09:33:38.575530 master-0 kubenswrapper[27819]: I0319 09:33:38.575466 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 19 09:33:38.596691 master-0 kubenswrapper[27819]: I0319 09:33:38.596617 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-gmb5f"
Mar 19 09:33:38.607746 master-0 kubenswrapper[27819]: I0319 09:33:38.607694 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 19 09:33:38.616659 master-0 kubenswrapper[27819]: I0319 09:33:38.616597 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 19 09:33:38.619389 master-0 kubenswrapper[27819]: I0319 09:33:38.619329 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:38.636295 master-0 kubenswrapper[27819]: I0319 09:33:38.636240 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 19 09:33:38.636608 master-0 kubenswrapper[27819]: I0319 09:33:38.636518 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5"
Mar 19 09:33:38.655927 master-0 kubenswrapper[27819]: I0319 09:33:38.655861 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 19 09:33:38.677039 master-0 kubenswrapper[27819]: I0319 09:33:38.676971 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 19 09:33:38.678127 master-0 kubenswrapper[27819]: I0319 09:33:38.678092 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ded5da9a-1447-46df-a8ff-ffd469562599-serving-cert\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl"
Mar 19 09:33:38.699577 master-0 kubenswrapper[27819]: I0319 09:33:38.697436 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 19 09:33:38.707702 master-0 kubenswrapper[27819]: I0319 09:33:38.707631 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/56365780-b87d-43fc-95f5-8a44166aecf8-metrics-tls\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p"
Mar 19 09:33:38.716229 master-0 kubenswrapper[27819]: I0319 09:33:38.716151 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 19 09:33:38.719908 master-0 kubenswrapper[27819]: I0319 09:33:38.719804 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ded5da9a-1447-46df-a8ff-ffd469562599-service-ca\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl"
Mar 19 09:33:38.736373 master-0 kubenswrapper[27819]: I0319 09:33:38.736286 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 19 09:33:38.737745 master-0 kubenswrapper[27819]: I0319 09:33:38.737675 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cef53432-93f5-4581-b3de-c8cc5cac2ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl"
Mar 19 09:33:38.756468 master-0 kubenswrapper[27819]: I0319 09:33:38.756403 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 19 09:33:38.775951 master-0 kubenswrapper[27819]: I0319 09:33:38.775858 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 19 09:33:38.780869 master-0 kubenswrapper[27819]: I0319 09:33:38.780808 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0adaea87-67d0-41a7-a1f3-855fdd483aca-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-fngzd\" (UID: \"0adaea87-67d0-41a7-a1f3-855fdd483aca\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd"
Mar 19 09:33:38.796612 master-0 kubenswrapper[27819]: I0319 09:33:38.796518 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 19 09:33:38.799195 master-0 kubenswrapper[27819]: I0319 09:33:38.799137 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:33:38.815735 master-0 kubenswrapper[27819]: I0319 09:33:38.815660 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 19 09:33:38.817018 master-0 kubenswrapper[27819]: I0319 09:33:38.816950 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3610f08-aba1-411d-aa6d-811b88acdb7b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2"
Mar 19 09:33:38.835517 master-0 kubenswrapper[27819]: I0319 09:33:38.835417 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 19 09:33:38.856085 master-0 kubenswrapper[27819]: I0319 09:33:38.855922 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4qjxp"
Mar 19 09:33:38.875903 master-0 kubenswrapper[27819]: I0319 09:33:38.875847 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xmjpx"
Mar 19 09:33:38.896825 master-0 kubenswrapper[27819]: I0319 09:33:38.896762 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-g7f7m"
Mar 19 09:33:38.916006 master-0 kubenswrapper[27819]: I0319 09:33:38.915935 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 19 09:33:38.917332 master-0 kubenswrapper[27819]: I0319 09:33:38.917279 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-proxy-tls\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg"
Mar 19 09:33:38.935584 master-0 kubenswrapper[27819]: I0319 09:33:38.935479 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 19 09:33:38.936516 master-0 kubenswrapper[27819]: E0319 09:33:38.936477 27819 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.936604 master-0 kubenswrapper[27819]: E0319 09:33:38.936575 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls podName:67e5534b-f428-45cf-b54e-d06b25dc3e09 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.936557391 +0000 UTC m=+4.858135083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls") pod "machine-config-controller-b4f87c5b9-k7nfp" (UID: "67e5534b-f428-45cf-b54e-d06b25dc3e09") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.936771 master-0 kubenswrapper[27819]: E0319 09:33:38.936735 27819 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:38.936936 master-0 kubenswrapper[27819]: E0319 09:33:38.936909 27819 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.936988 master-0 kubenswrapper[27819]: E0319 09:33:38.936931 27819 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.937017 master-0 kubenswrapper[27819]: E0319 09:33:38.936988 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images podName:cd1425b9-fcd1-4aba-899f-e110eebce626 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.936810347 +0000 UTC m=+4.858388099 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images") pod "machine-api-operator-6fbb6cf6f9-9jbdl" (UID: "cd1425b9-fcd1-4aba-899f-e110eebce626") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:38.937048 master-0 kubenswrapper[27819]: E0319 09:33:38.937015 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs podName:e0491730-604c-4a66-b827-458da88d262b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.937005013 +0000 UTC m=+4.858582725 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs") pod "machine-config-server-nsnqt" (UID: "e0491730-604c-4a66-b827-458da88d262b") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.937048 master-0 kubenswrapper[27819]: E0319 09:33:38.937036 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token podName:e0491730-604c-4a66-b827-458da88d262b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.937027813 +0000 UTC m=+4.858605635 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token") pod "machine-config-server-nsnqt" (UID: "e0491730-604c-4a66-b827-458da88d262b") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.938385 master-0 kubenswrapper[27819]: E0319 09:33:38.938348 27819 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:38.938452 master-0 kubenswrapper[27819]: E0319 09:33:38.938379 27819 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.938452 master-0 kubenswrapper[27819]: E0319 09:33:38.938414 27819 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.938452 master-0 kubenswrapper[27819]: E0319 09:33:38.938444 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls podName:3f81774a-22a4-4335-961b-04e53e0f3b5e nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.938427552 +0000 UTC m=+4.860005354 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-tqnnm" (UID: "3f81774a-22a4-4335-961b-04e53e0f3b5e") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.938581 master-0 kubenswrapper[27819]: E0319 09:33:38.938464 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config podName:cd1425b9-fcd1-4aba-899f-e110eebce626 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.938456802 +0000 UTC m=+4.860034494 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config") pod "machine-api-operator-6fbb6cf6f9-9jbdl" (UID: "cd1425b9-fcd1-4aba-899f-e110eebce626") : failed to sync configmap cache: timed out waiting for the condition
Mar 19 09:33:38.938581 master-0 kubenswrapper[27819]: E0319 09:33:38.938475 27819 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.938581 master-0 kubenswrapper[27819]: E0319 09:33:38.938486 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls podName:cd1425b9-fcd1-4aba-899f-e110eebce626 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.938476393 +0000 UTC m=+4.860054085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-9jbdl" (UID: "cd1425b9-fcd1-4aba-899f-e110eebce626") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.938581 master-0 kubenswrapper[27819]: E0319 09:33:38.938532 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls podName:c3610f08-aba1-411d-aa6d-811b88acdb7b nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.938502994 +0000 UTC m=+4.860080756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-sbgz2" (UID: "c3610f08-aba1-411d-aa6d-811b88acdb7b") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.939558 master-0 kubenswrapper[27819]: E0319 09:33:38.939507 27819 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.939617 master-0 kubenswrapper[27819]: E0319 09:33:38.939562 27819 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.939617 master-0 kubenswrapper[27819]: E0319 09:33:38.939592 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls podName:6ed4ce2b-080f-4523-8527-eee768e06123 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.939576824 +0000 UTC m=+4.861154576 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-vxndj" (UID: "6ed4ce2b-080f-4523-8527-eee768e06123") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.939617 master-0 kubenswrapper[27819]: E0319 09:33:38.939610 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config podName:3f81774a-22a4-4335-961b-04e53e0f3b5e nodeName:}" failed. No retries permitted until 2026-03-19 09:33:39.939602965 +0000 UTC m=+4.861180677 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-tqnnm" (UID: "3f81774a-22a4-4335-961b-04e53e0f3b5e") : failed to sync secret cache: timed out waiting for the condition
Mar 19 09:33:38.955636 master-0 kubenswrapper[27819]: I0319 09:33:38.955525 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 19 09:33:38.986638 master-0 kubenswrapper[27819]: I0319 09:33:38.986562 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2hfh\" (UniqueName: \"kubernetes.io/projected/012cdc1d-ebc8-431e-9a52-9a39de95dd0d-kube-api-access-x2hfh\") pod \"service-ca-operator-b865698dc-pfs65\" (UID: \"012cdc1d-ebc8-431e-9a52-9a39de95dd0d\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-pfs65"
Mar 19 09:33:39.006939 master-0 kubenswrapper[27819]: I0319 09:33:39.006873 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7thvr\" (UniqueName: \"kubernetes.io/projected/16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff-kube-api-access-7thvr\") pod \"openshift-controller-manager-operator-8c94f4649-6vplt\" (UID: \"16a69ef7-2fc3-44e4-bc5c-ed50778ef9ff\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-6vplt"
Mar 19 09:33:39.027876 master-0 kubenswrapper[27819]: I0319 09:33:39.027804 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-bound-sa-token\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:33:39.036288 master-0 kubenswrapper[27819]: I0319 09:33:39.036247 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 19 09:33:39.056416 master-0 kubenswrapper[27819]: I0319 09:33:39.056375 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 19 09:33:39.075737 master-0 kubenswrapper[27819]: I0319 09:33:39.075669 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 19 09:33:39.095951 master-0 kubenswrapper[27819]: I0319 09:33:39.095890 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-sjg6x"
Mar 19 09:33:39.116260 master-0 kubenswrapper[27819]: I0319 09:33:39.116140 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 19 09:33:39.135527 master-0 kubenswrapper[27819]: I0319 09:33:39.135470 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 19 09:33:39.167011 master-0 kubenswrapper[27819]: I0319 09:33:39.166901 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvbr\" (UniqueName: \"kubernetes.io/projected/b42aee2f-bffc-4c43-bf20-16d9c67d216c-kube-api-access-lbvbr\") pod \"network-check-source-b4bf74f6-tk6ns\" (UID: \"b42aee2f-bffc-4c43-bf20-16d9c67d216c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-tk6ns"
Mar 19 09:33:39.176334 master-0 kubenswrapper[27819]: I0319 09:33:39.176290 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-45rfb"
Mar 19 09:33:39.206729 master-0 kubenswrapper[27819]: I0319 09:33:39.206674 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr4bl\" (UniqueName: \"kubernetes.io/projected/70258988-8374-4aee-aaa2-be3c2e853062-kube-api-access-tr4bl\") pod \"openshift-apiserver-operator-d65958b8-hrb9m\" (UID: \"70258988-8374-4aee-aaa2-be3c2e853062\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-hrb9m"
Mar 19 09:33:39.226904 master-0 kubenswrapper[27819]: I0319 09:33:39.226824 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w5fk\" (UniqueName: \"kubernetes.io/projected/09cc190d-5647-40a1-bfe9-5355bcb33b10-kube-api-access-4w5fk\") pod \"multus-8pt59\" (UID: \"09cc190d-5647-40a1-bfe9-5355bcb33b10\") " pod="openshift-multus/multus-8pt59"
Mar 19 09:33:39.235904 master-0 kubenswrapper[27819]: I0319 09:33:39.235824 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 19 09:33:39.256252 master-0 kubenswrapper[27819]: I0319 09:33:39.256195 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 19 09:33:39.275342 master-0 kubenswrapper[27819]: I0319 09:33:39.275300 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2ncgt"
Mar 19 09:33:39.296707 master-0 kubenswrapper[27819]: I0319 09:33:39.296658 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-nlbmt"
Mar 19 09:33:39.314842 master-0 kubenswrapper[27819]: I0319 09:33:39.314805 27819 request.go:700] Waited for 3.010431672s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Mar 19 09:33:39.316809 master-0 kubenswrapper[27819]: I0319 09:33:39.316762 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 19 09:33:39.335526 master-0 kubenswrapper[27819]: I0319 09:33:39.335500 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-szhzh"
Mar 19 09:33:39.356735 master-0 kubenswrapper[27819]: I0319 09:33:39.356704 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 19 09:33:39.376287 master-0 kubenswrapper[27819]: I0319 09:33:39.376167 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x7brr"
Mar 19 09:33:39.396879 master-0 kubenswrapper[27819]: I0319 09:33:39.396842 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 19 09:33:39.415799 master-0 kubenswrapper[27819]: I0319 09:33:39.415740 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 19 09:33:39.435807 master-0 kubenswrapper[27819]: I0319 09:33:39.435757 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-fxmqq"
Mar 19 09:33:39.470674 master-0 kubenswrapper[27819]: I0319 09:33:39.470618 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h925l\" (UniqueName: \"kubernetes.io/projected/676f4062-ea34-48d0-80d7-3cd3d9da341e-kube-api-access-h925l\") pod \"cluster-monitoring-operator-58845fbb57-wptdb\" (UID: \"676f4062-ea34-48d0-80d7-3cd3d9da341e\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-wptdb"
Mar 19 09:33:39.486593 master-0 kubenswrapper[27819]: I0319 09:33:39.486526 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53bff8e4-bf60-4386-8905-49d43fd6c420-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-8bz9x\" (UID: \"53bff8e4-bf60-4386-8905-49d43fd6c420\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-8bz9x"
Mar 19 09:33:39.507471 master-0 kubenswrapper[27819]: I0319 09:33:39.507398 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnxk9\" (UniqueName: \"kubernetes.io/projected/70e8c62b-97c3-4c0c-85d3-f660118831fd-kube-api-access-bnxk9\") pod \"insights-operator-68bf6ff9d6-h4zrl\" (UID: \"70e8c62b-97c3-4c0c-85d3-f660118831fd\") " pod="openshift-insights/insights-operator-68bf6ff9d6-h4zrl"
Mar 19 09:33:39.526662 master-0 kubenswrapper[27819]: I0319 09:33:39.526600 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/46c7cde3-2cb4-4fa8-94ca-d5feff877da9-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-f5zsd\" (UID: \"46c7cde3-2cb4-4fa8-94ca-d5feff877da9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-f5zsd"
Mar 19 09:33:39.548722 master-0 kubenswrapper[27819]: I0319 09:33:39.548666 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpcnv\" (UniqueName: \"kubernetes.io/projected/47da8964-3606-4181-87fb-8f04a3065295-kube-api-access-wpcnv\") pod \"network-node-identity-t7zwh\" (UID: \"47da8964-3606-4181-87fb-8f04a3065295\") " pod="openshift-network-node-identity/network-node-identity-t7zwh"
Mar 19 09:33:39.568645 master-0 kubenswrapper[27819]: I0319 09:33:39.568582 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbw6q\" (UniqueName: \"kubernetes.io/projected/1187ddcd-3b78-4b3f-9b12-06ce76cb6040-kube-api-access-zbw6q\") pod \"ovnkube-control-plane-57f769d897-dbqlt\" (UID: \"1187ddcd-3b78-4b3f-9b12-06ce76cb6040\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-dbqlt"
Mar 19 09:33:39.586851 master-0 kubenswrapper[27819]: I0319 09:33:39.586802 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca2f7cb3-8812-4fe3-83a5-61668ef87f99-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wrdsg\" (UID: \"ca2f7cb3-8812-4fe3-83a5-61668ef87f99\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wrdsg"
Mar 19 09:33:39.607169 master-0 kubenswrapper[27819]: I0319 09:33:39.607123 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7rj\" (UniqueName: \"kubernetes.io/projected/525b41b5-82d8-4d47-8350-79644a2c9360-kube-api-access-8s7rj\") pod \"cluster-storage-operator-7d87854d6-cgsgk\" (UID: \"525b41b5-82d8-4d47-8350-79644a2c9360\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-cgsgk"
Mar 19 09:33:39.648730 master-0 kubenswrapper[27819]: I0319 09:33:39.646453 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvtc\" (UniqueName: \"kubernetes.io/projected/bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c-kube-api-access-smvtc\") pod \"network-operator-7bd846bfc4-gkvf5\" (UID: \"bc4a026a-8f3d-4dbb-a6bc-793d5fdca46c\") " pod="openshift-network-operator/network-operator-7bd846bfc4-gkvf5"
Mar 19 09:33:39.652261 master-0 kubenswrapper[27819]: I0319 09:33:39.652223 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85vjd\" (UniqueName: \"kubernetes.io/projected/17e0cb4a-e776-4886-927e-ae446af7f234-kube-api-access-85vjd\") pod \"cluster-olm-operator-67dcd4998-pqxp5\" (UID: \"17e0cb4a-e776-4886-927e-ae446af7f234\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-pqxp5"
Mar 19 09:33:39.667761 master-0 kubenswrapper[27819]: I0319 09:33:39.667711 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdvd\" (UniqueName: \"kubernetes.io/projected/1e14d946-54b8-4a3d-ae9f-ae82c5393ad4-kube-api-access-jrdvd\") pod \"etcd-operator-8544cbcf9c-cfmgj\" (UID: \"1e14d946-54b8-4a3d-ae9f-ae82c5393ad4\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-cfmgj"
Mar 19 09:33:39.686507 master-0 kubenswrapper[27819]: I0319 09:33:39.686454 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c654s\" (UniqueName: \"kubernetes.io/projected/a67ae8dc-240d-4708-9139-1d49c601e552-kube-api-access-c654s\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf\" (UID: \"a67ae8dc-240d-4708-9139-1d49c601e552\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-9jfdf"
Mar 19 09:33:39.707787 master-0 kubenswrapper[27819]: I0319 09:33:39.707721 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnp7\" (UniqueName: \"kubernetes.io/projected/3a07456d-2e8e-4e80-a777-d0903ad21f07-kube-api-access-qvnp7\") pod \"cluster-baremetal-operator-6f69995874-sw7cc\" (UID: \"3a07456d-2e8e-4e80-a777-d0903ad21f07\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-sw7cc"
Mar 19 09:33:39.733743 master-0 kubenswrapper[27819]: I0319 09:33:39.733688 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfnn\" (UniqueName: \"kubernetes.io/projected/cdcc18f9-66cf-45d9-965d-d0a57fcf285c-kube-api-access-4tfnn\") pod \"ovnkube-node-zmrpw\" (UID: \"cdcc18f9-66cf-45d9-965d-d0a57fcf285c\") " pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw"
Mar 19 09:33:39.756233 master-0 kubenswrapper[27819]: I0319 09:33:39.756179 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58zw\" (UniqueName: \"kubernetes.io/projected/672ad0aa-a0c5-4640-840d-3ffa02c55d62-kube-api-access-t58zw\") pod \"iptables-alerter-p9bbz\" (UID: \"672ad0aa-a0c5-4640-840d-3ffa02c55d62\") " pod="openshift-network-operator/iptables-alerter-p9bbz"
Mar 19 09:33:39.767532 master-0 kubenswrapper[27819]: I0319 09:33:39.767466 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4n26\" (UniqueName: \"kubernetes.io/projected/d6cd2eac-6412-4f38-8272-743c67b218a3-kube-api-access-x4n26\") pod \"cluster-image-registry-operator-5549dc66cb-nc9rw\" (UID: \"d6cd2eac-6412-4f38-8272-743c67b218a3\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-nc9rw"
Mar 19 09:33:39.786999 master-0 kubenswrapper[27819]: I0319 09:33:39.786956 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8cg\" (UniqueName: \"kubernetes.io/projected/45523224-f530-4354-90de-7fd65a1a3911-kube-api-access-8l8cg\") pod \"dns-operator-9c5679d8f-k89rz\" (UID: \"45523224-f530-4354-90de-7fd65a1a3911\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-k89rz"
Mar 19 09:33:39.807870 master-0 kubenswrapper[27819]: I0319 09:33:39.807816 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll8k\" (UniqueName: \"kubernetes.io/projected/e25a16f3-dfe0-49c5-a31d-e310d369f406-kube-api-access-tll8k\") pod \"olm-operator-5c9796789-fts6w\" (UID: \"e25a16f3-dfe0-49c5-a31d-e310d369f406\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w"
Mar 19 09:33:39.826738 master-0 kubenswrapper[27819]: I0319 09:33:39.826711 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbzvl\" (UniqueName: \"kubernetes.io/projected/8bdeb4f3-99f7-44ef-beac-53c3cc073c5a-kube-api-access-rbzvl\") pod \"ingress-operator-66b84d69b-vfnhd\" (UID: \"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd"
Mar 19 09:33:39.842562 master-0 kubenswrapper[27819]: I0319 09:33:39.842511 27819 scope.go:117] "RemoveContainer" containerID="1f24a4a0dde2654722d413cb5a1fcc7148d3e4eca845a455dcdbfc442d3a81b7"
Mar 19 09:33:39.852105 master-0 kubenswrapper[27819]: I0319 09:33:39.852055 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqts\" (UniqueName: \"kubernetes.io/projected/e0491730-604c-4a66-b827-458da88d262b-kube-api-access-gmqts\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:33:39.872580 master-0 kubenswrapper[27819]: I0319 09:33:39.872511 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flln7\" (UniqueName: \"kubernetes.io/projected/57227a66-c758-4a46-a5e1-f603baa3f570-kube-api-access-flln7\") pod \"router-default-7dcf5569b5-k99cg\" (UID: \"57227a66-c758-4a46-a5e1-f603baa3f570\") " pod="openshift-ingress/router-default-7dcf5569b5-k99cg"
Mar 19 09:33:39.898110 master-0 kubenswrapper[27819]: I0319 09:33:39.898076 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqc86\" (UniqueName: \"kubernetes.io/projected/9d3fd276-2fe2-423a-b1ee-f27f1596d013-kube-api-access-cqc86\") pod \"ingress-canary-6r9c4\" (UID: \"9d3fd276-2fe2-423a-b1ee-f27f1596d013\") " pod="openshift-ingress-canary/ingress-canary-6r9c4"
Mar 19 09:33:39.908297 master-0 kubenswrapper[27819]: I0319 09:33:39.908197 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jns5r\" (UniqueName: \"kubernetes.io/projected/3eeb72c3-1a56-4955-845e-81607513b1b2-kube-api-access-jns5r\") pod \"migrator-8487694857-nsnds\" (UID: \"3eeb72c3-1a56-4955-845e-81607513b1b2\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-nsnds"
Mar 19 09:33:39.928328 master-0 kubenswrapper[27819]: I0319 09:33:39.928300 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w48g\" (UniqueName: \"kubernetes.io/projected/3f81774a-22a4-4335-961b-04e53e0f3b5e-kube-api-access-2w48g\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"
Mar 19 09:33:39.953185 master-0 kubenswrapper[27819]: I0319 09:33:39.953125 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9tpx\" (UniqueName: \"kubernetes.io/projected/7825a2ac-eab6-4988-861a-9e3bfdf5dcc8-kube-api-access-s9tpx\") pod \"cluster-autoscaler-operator-866dc4744-p4hvm\" (UID: \"7825a2ac-eab6-4988-861a-9e3bfdf5dcc8\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-p4hvm"
Mar 19 09:33:39.968602 master-0 kubenswrapper[27819]: I0319 09:33:39.968531 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ded5da9a-1447-46df-a8ff-ffd469562599-kube-api-access\") pod \"cluster-version-operator-7d58488df-dgqfl\" (UID: \"ded5da9a-1447-46df-a8ff-ffd469562599\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-dgqfl"
Mar 19 09:33:39.972533 master-0 kubenswrapper[27819]: I0319 09:33:39.972482 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj"
Mar 19 09:33:39.972645 master-0 kubenswrapper[27819]: I0319 09:33:39.972576 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"
Mar 19 09:33:39.972720 master-0 kubenswrapper[27819]: I0319 09:33:39.972654 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp"
Mar 19 09:33:39.972720 master-0 kubenswrapper[27819]: I0319 09:33:39.972678 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:33:39.972720 master-0 kubenswrapper[27819]: I0319 09:33:39.972707 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:33:39.972838 master-0 kubenswrapper[27819]: I0319 09:33:39.972738 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt"
Mar 19 09:33:39.972838 master-0 kubenswrapper[27819]: I0319 09:33:39.972822 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:33:39.972931 master-0 kubenswrapper[27819]: I0319 09:33:39.972920 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm"
Mar 19 09:33:39.972980 master-0 kubenswrapper[27819]: I0319 09:33:39.972944 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl"
Mar 19 09:33:39.972980 master-0
kubenswrapper[27819]: I0319 09:33:39.972969 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:39.973341 master-0 kubenswrapper[27819]: I0319 09:33:39.973310 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3610f08-aba1-411d-aa6d-811b88acdb7b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:39.973522 master-0 kubenswrapper[27819]: I0319 09:33:39.973495 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-certs\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt" Mar 19 09:33:39.973712 master-0 kubenswrapper[27819]: I0319 09:33:39.973681 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-images\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:39.973966 master-0 kubenswrapper[27819]: I0319 09:33:39.973691 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd1425b9-fcd1-4aba-899f-e110eebce626-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:39.974073 master-0 kubenswrapper[27819]: I0319 09:33:39.973885 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:33:39.974163 master-0 kubenswrapper[27819]: I0319 09:33:39.974094 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ed4ce2b-080f-4523-8527-eee768e06123-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:33:39.974264 master-0 kubenswrapper[27819]: I0319 09:33:39.974146 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd1425b9-fcd1-4aba-899f-e110eebce626-config\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:39.974357 master-0 kubenswrapper[27819]: I0319 09:33:39.974250 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67e5534b-f428-45cf-b54e-d06b25dc3e09-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" Mar 19 09:33:39.974661 master-0 kubenswrapper[27819]: I0319 09:33:39.974472 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e0491730-604c-4a66-b827-458da88d262b-node-bootstrap-token\") pod \"machine-config-server-nsnqt\" (UID: \"e0491730-604c-4a66-b827-458da88d262b\") " pod="openshift-machine-config-operator/machine-config-server-nsnqt" Mar 19 09:33:39.974661 master-0 kubenswrapper[27819]: I0319 09:33:39.974264 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3f81774a-22a4-4335-961b-04e53e0f3b5e-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-tqnnm\" (UID: \"3f81774a-22a4-4335-961b-04e53e0f3b5e\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-tqnnm" Mar 19 09:33:40.001451 master-0 kubenswrapper[27819]: I0319 09:33:40.001395 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4jnj\" (UniqueName: \"kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj\") pod \"metrics-server-7c64897fc5-qj6vj\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:33:40.006215 master-0 kubenswrapper[27819]: I0319 09:33:40.006177 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svz6j\" (UniqueName: \"kubernetes.io/projected/1669b77c-4bef-42d5-ad0b-63c12a6677b2-kube-api-access-svz6j\") pod \"apiserver-6f6b54748-s5cpx\" (UID: \"1669b77c-4bef-42d5-ad0b-63c12a6677b2\") " pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:40.028966 master-0 kubenswrapper[27819]: I0319 09:33:40.028908 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgrjz\" 
(UniqueName: \"kubernetes.io/projected/d80f71af-e3ff-4a9f-8c9c-883a6a5581d0-kube-api-access-lgrjz\") pod \"telemeter-client-8699f95c5b-7w9vq\" (UID: \"d80f71af-e3ff-4a9f-8c9c-883a6a5581d0\") " pod="openshift-monitoring/telemeter-client-8699f95c5b-7w9vq" Mar 19 09:33:40.047292 master-0 kubenswrapper[27819]: I0319 09:33:40.047197 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8p7b\" (UniqueName: \"kubernetes.io/projected/5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5-kube-api-access-g8p7b\") pod \"redhat-operators-7cczg\" (UID: \"5f7ae648-6a07-4d41-9ce9-40b5fc9be2e5\") " pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:40.067213 master-0 kubenswrapper[27819]: I0319 09:33:40.067153 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxn9l\" (UniqueName: \"kubernetes.io/projected/72756f50-c970-4ef6-b8ca-88e49f996a74-kube-api-access-zxn9l\") pod \"community-operators-887wl\" (UID: \"72756f50-c970-4ef6-b8ca-88e49f996a74\") " pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:40.091763 master-0 kubenswrapper[27819]: I0319 09:33:40.091700 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfmmt\" (UniqueName: \"kubernetes.io/projected/3883b232-5772-460f-9e94-b4cbc7b7e638-kube-api-access-nfmmt\") pod \"kube-state-metrics-7bbc969446-d46h5\" (UID: \"3883b232-5772-460f-9e94-b4cbc7b7e638\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-d46h5" Mar 19 09:33:40.107475 master-0 kubenswrapper[27819]: I0319 09:33:40.107421 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6t27\" (UniqueName: \"kubernetes.io/projected/561b7381-8439-4ccc-ac50-d7a50aeb0c55-kube-api-access-t6t27\") pod \"apiserver-775788bf78-tgdnw\" (UID: \"561b7381-8439-4ccc-ac50-d7a50aeb0c55\") " pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:40.128173 master-0 kubenswrapper[27819]: I0319 
09:33:40.128133 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbmx9\" (UniqueName: \"kubernetes.io/projected/a591384f-f83e-4f65-b5d0-d519f05edbd9-kube-api-access-vbmx9\") pod \"node-resolver-mf78p\" (UID: \"a591384f-f83e-4f65-b5d0-d519f05edbd9\") " pod="openshift-dns/node-resolver-mf78p" Mar 19 09:33:40.152763 master-0 kubenswrapper[27819]: I0319 09:33:40.152712 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp5rd\" (UniqueName: \"kubernetes.io/projected/2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e-kube-api-access-rp5rd\") pod \"machine-config-daemon-rw7tg\" (UID: \"2ea94b52-7d8f-4b88-97c7-ff1a774a5f8e\") " pod="openshift-machine-config-operator/machine-config-daemon-rw7tg" Mar 19 09:33:40.169098 master-0 kubenswrapper[27819]: I0319 09:33:40.168968 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5hmg\" (UniqueName: \"kubernetes.io/projected/de72ea6c-f3ce-41a5-9a43-9db4f27ed84b-kube-api-access-k5hmg\") pod \"csi-snapshot-controller-64854d9cff-blgk8\" (UID: \"de72ea6c-f3ce-41a5-9a43-9db4f27ed84b\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-blgk8" Mar 19 09:33:40.188790 master-0 kubenswrapper[27819]: I0319 09:33:40.188744 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rzx9\" (UniqueName: \"kubernetes.io/projected/56365780-b87d-43fc-95f5-8a44166aecf8-kube-api-access-5rzx9\") pod \"dns-default-9xr8p\" (UID: \"56365780-b87d-43fc-95f5-8a44166aecf8\") " pod="openshift-dns/dns-default-9xr8p" Mar 19 09:33:40.209863 master-0 kubenswrapper[27819]: I0319 09:33:40.209805 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrmjf\" (UniqueName: \"kubernetes.io/projected/d5d9fbaf-ba14-4d2b-8376-1634eabbc782-kube-api-access-rrmjf\") pod \"operator-controller-controller-manager-57777556ff-7v7bv\" (UID: 
\"d5d9fbaf-ba14-4d2b-8376-1634eabbc782\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:40.227764 master-0 kubenswrapper[27819]: I0319 09:33:40.227717 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssdjz\" (UniqueName: \"kubernetes.io/projected/31e46a34-8a00-4bb3-869b-8a5911ef6cf8-kube-api-access-ssdjz\") pod \"node-exporter-k6kn8\" (UID: \"31e46a34-8a00-4bb3-869b-8a5911ef6cf8\") " pod="openshift-monitoring/node-exporter-k6kn8" Mar 19 09:33:40.247592 master-0 kubenswrapper[27819]: I0319 09:33:40.247524 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mncvz\" (UniqueName: \"kubernetes.io/projected/e8a7e077-3f6c-4efb-9865-cf82480c5da1-kube-api-access-mncvz\") pod \"redhat-marketplace-brpbp\" (UID: \"e8a7e077-3f6c-4efb-9865-cf82480c5da1\") " pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:40.267143 master-0 kubenswrapper[27819]: I0319 09:33:40.267104 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2ng\" (UniqueName: \"kubernetes.io/projected/14ee9a22-5b04-402c-98e9-35e2eb7cb2a2-kube-api-access-7g2ng\") pod \"machine-approver-5c6485487f-ttn8h\" (UID: \"14ee9a22-5b04-402c-98e9-35e2eb7cb2a2\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-ttn8h" Mar 19 09:33:40.286870 master-0 kubenswrapper[27819]: I0319 09:33:40.286820 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtjps\" (UniqueName: \"kubernetes.io/projected/55440bf9-0881-4823-af64-5652c2ad89ff-kube-api-access-gtjps\") pod \"packageserver-57475586f6-pnw8k\" (UID: \"55440bf9-0881-4823-af64-5652c2ad89ff\") " pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:33:40.307752 master-0 kubenswrapper[27819]: I0319 09:33:40.307686 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s2vbp\" (UniqueName: \"kubernetes.io/projected/cd1425b9-fcd1-4aba-899f-e110eebce626-kube-api-access-s2vbp\") pod \"machine-api-operator-6fbb6cf6f9-9jbdl\" (UID: \"cd1425b9-fcd1-4aba-899f-e110eebce626\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-9jbdl" Mar 19 09:33:40.328204 master-0 kubenswrapper[27819]: I0319 09:33:40.328149 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxz2j\" (UniqueName: \"kubernetes.io/projected/1b230b9d-529c-4b28-bc73-659a28d7961a-kube-api-access-mxz2j\") pod \"openshift-state-metrics-5dc6c74576-84ztr\" (UID: \"1b230b9d-529c-4b28-bc73-659a28d7961a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-84ztr" Mar 19 09:33:40.334569 master-0 kubenswrapper[27819]: I0319 09:33:40.334492 27819 request.go:700] Waited for 3.932568689s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token Mar 19 09:33:40.356739 master-0 kubenswrapper[27819]: I0319 09:33:40.356695 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtj5f\" (UniqueName: \"kubernetes.io/projected/4a73a5b0-478f-496d-8b0c-9e3daf39c082-kube-api-access-qtj5f\") pod \"multus-admission-controller-58c9f8fc64-69cqn\" (UID: \"4a73a5b0-478f-496d-8b0c-9e3daf39c082\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-69cqn" Mar 19 09:33:40.380044 master-0 kubenswrapper[27819]: I0319 09:33:40.379992 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s45nc\" (UniqueName: \"kubernetes.io/projected/67e5534b-f428-45cf-b54e-d06b25dc3e09-kube-api-access-s45nc\") pod \"machine-config-controller-b4f87c5b9-k7nfp\" (UID: \"67e5534b-f428-45cf-b54e-d06b25dc3e09\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-k7nfp" Mar 19 09:33:40.402451 master-0 kubenswrapper[27819]: I0319 
09:33:40.402419 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwbr\" (UniqueName: \"kubernetes.io/projected/d504cbc7-5c09-4712-9f7a-c41a6386ef79-kube-api-access-tmwbr\") pod \"certified-operators-l26xf\" (UID: \"d504cbc7-5c09-4712-9f7a-c41a6386ef79\") " pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:33:40.411893 master-0 kubenswrapper[27819]: I0319 09:33:40.411843 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npg9k\" (UniqueName: \"kubernetes.io/projected/dde1a2d9-a43e-4b26-82d7-e0f83577468f-kube-api-access-npg9k\") pod \"tuned-5r5sh\" (UID: \"dde1a2d9-a43e-4b26-82d7-e0f83577468f\") " pod="openshift-cluster-node-tuning-operator/tuned-5r5sh" Mar 19 09:33:40.427738 master-0 kubenswrapper[27819]: I0319 09:33:40.427621 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdgvx\" (UniqueName: \"kubernetes.io/projected/c3610f08-aba1-411d-aa6d-811b88acdb7b-kube-api-access-jdgvx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-sbgz2\" (UID: \"c3610f08-aba1-411d-aa6d-811b88acdb7b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-sbgz2" Mar 19 09:33:40.448056 master-0 kubenswrapper[27819]: I0319 09:33:40.448010 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfdkb\" (UniqueName: \"kubernetes.io/projected/14438c84-72d3-4f45-88a4-fc7e80df5fb8-kube-api-access-dfdkb\") pod \"cloud-credential-operator-744f9dbf77-97lvq\" (UID: \"14438c84-72d3-4f45-88a4-fc7e80df5fb8\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-97lvq" Mar 19 09:33:40.469302 master-0 kubenswrapper[27819]: I0319 09:33:40.469245 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nql4h\" (UniqueName: 
\"kubernetes.io/projected/6ed4ce2b-080f-4523-8527-eee768e06123-kube-api-access-nql4h\") pod \"cluster-samples-operator-85f7577d78-vxndj\" (UID: \"6ed4ce2b-080f-4523-8527-eee768e06123\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-vxndj" Mar 19 09:33:40.485978 master-0 kubenswrapper[27819]: I0319 09:33:40.485949 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6m8\" (UniqueName: \"kubernetes.io/projected/d58c6b38-ef11-465c-9fee-b83b84ce4669-kube-api-access-bs6m8\") pod \"catalogd-controller-manager-6864dc98f7-rgzxb\" (UID: \"d58c6b38-ef11-465c-9fee-b83b84ce4669\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:40.508488 master-0 kubenswrapper[27819]: I0319 09:33:40.508458 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9t7v\" (UniqueName: \"kubernetes.io/projected/fed75514-8f48-40b7-9fed-0afd6042cfbf-kube-api-access-h9t7v\") pod \"service-ca-79bc6b8d76-4lbsc\" (UID: \"fed75514-8f48-40b7-9fed-0afd6042cfbf\") " pod="openshift-service-ca/service-ca-79bc6b8d76-4lbsc" Mar 19 09:33:40.529184 master-0 kubenswrapper[27819]: I0319 09:33:40.529130 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9vh\" (UniqueName: \"kubernetes.io/projected/cef53432-93f5-4581-b3de-c8cc5cac2ecb-kube-api-access-sm9vh\") pod \"control-plane-machine-set-operator-6f97756bc8-chzwl\" (UID: \"cef53432-93f5-4581-b3de-c8cc5cac2ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-chzwl" Mar 19 09:33:40.546863 master-0 kubenswrapper[27819]: E0319 09:33:40.546814 27819 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:40.547081 master-0 kubenswrapper[27819]: E0319 09:33:40.547068 27819 projected.go:194] Error preparing data for projected volume kube-api-access 
for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:40.547265 master-0 kubenswrapper[27819]: E0319 09:33:40.547209 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access podName:98826625-8de0-4bf7-8926-ec62517369e5 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:41.047189255 +0000 UTC m=+5.968766947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access") pod "installer-4-master-0" (UID: "98826625-8de0-4bf7-8926-ec62517369e5") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:40.563935 master-0 kubenswrapper[27819]: E0319 09:33:40.563884 27819 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.285s" Mar 19 09:33:40.564116 master-0 kubenswrapper[27819]: I0319 09:33:40.563958 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"6950dbd162496ff96ac22cf66872a0f41be7ebc9910fbd50974ed34a97c4be41"} Mar 19 09:33:40.564116 master-0 kubenswrapper[27819]: I0319 09:33:40.563997 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:40.564116 master-0 kubenswrapper[27819]: I0319 09:33:40.564012 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 19 09:33:40.564116 master-0 kubenswrapper[27819]: I0319 09:33:40.564024 27819 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
mirrorPodUID="34655e22-f69d-4875-b45b-5a476777e894" Mar 19 09:33:40.564116 master-0 kubenswrapper[27819]: I0319 09:33:40.564041 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerStarted","Data":"59bfbc24aed025cbeb33d0e5a40c5d7418d9f9aec04c5fd5b96dbe02fab0ba33"} Mar 19 09:33:40.572141 master-0 kubenswrapper[27819]: I0319 09:33:40.572109 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 19 09:33:40.584909 master-0 kubenswrapper[27819]: I0319 09:33:40.584848 27819 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:40.608101 master-0 kubenswrapper[27819]: I0319 09:33:40.608031 27819 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:40.629147 master-0 kubenswrapper[27819]: I0319 09:33:40.629092 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-vfnhd_8bdeb4f3-99f7-44ef-beac-53c3cc073c5a/ingress-operator/4.log" Mar 19 09:33:40.637345 master-0 kubenswrapper[27819]: I0319 09:33:40.637278 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 19 09:33:40.637345 master-0 kubenswrapper[27819]: I0319 09:33:40.637317 27819 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="34655e22-f69d-4875-b45b-5a476777e894" Mar 19 09:33:40.637345 master-0 kubenswrapper[27819]: I0319 09:33:40.637334 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 09:33:40.637633 master-0 
kubenswrapper[27819]: I0319 09:33:40.637409 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637422 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637433 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637441 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637451 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerStarted","Data":"81557cf106fbd5f4a3b2964beaaeaf69341eb9b15abccbc6d3aef5351309e1d6"} Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637487 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637498 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637519 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerStarted","Data":"74b087b8a1f11417cfbc6b3012b38420ffa8a4dbed87e2e5a22cd51bf2974639"} Mar 19 09:33:40.637633 master-0 
kubenswrapper[27819]: I0319 09:33:40.637530 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637560 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerStarted","Data":"32eb7fb05bc6b163861c244117b54dba57fa4d47af128f0765f0e871f04fa152"} Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637597 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637625 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:33:40.637633 master-0 kubenswrapper[27819]: I0319 09:33:40.637642 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-2k7c5" Mar 19 09:33:40.638285 master-0 kubenswrapper[27819]: I0319 09:33:40.637652 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-vfnhd" event={"ID":"8bdeb4f3-99f7-44ef-beac-53c3cc073c5a","Type":"ContainerStarted","Data":"bef2ef165f03e2c35651b0cca247c2410fff8092cfefdb20bc1f0b3edfc60cc1"} Mar 19 09:33:40.638285 master-0 kubenswrapper[27819]: I0319 09:33:40.637673 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9xr8p" Mar 19 09:33:40.638285 master-0 kubenswrapper[27819]: I0319 09:33:40.637689 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9xr8p" Mar 19 09:33:40.638285 master-0 kubenswrapper[27819]: I0319 09:33:40.637831 27819 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:40.661968 master-0 kubenswrapper[27819]: I0319 09:33:40.661923 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 19 09:33:40.686199 master-0 kubenswrapper[27819]: I0319 09:33:40.686135 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:33:40.900426 master-0 kubenswrapper[27819]: I0319 09:33:40.900370 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:40.935442 master-0 kubenswrapper[27819]: I0319 09:33:40.935341 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:41.092970 master-0 kubenswrapper[27819]: I0319 09:33:41.092405 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:41.092970 master-0 kubenswrapper[27819]: E0319 09:33:41.092869 27819 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:41.092970 master-0 kubenswrapper[27819]: E0319 09:33:41.092914 27819 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:41.093200 master-0 kubenswrapper[27819]: E0319 09:33:41.092993 27819 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access podName:98826625-8de0-4bf7-8926-ec62517369e5 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:42.092966856 +0000 UTC m=+7.014544648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access") pod "installer-4-master-0" (UID: "98826625-8de0-4bf7-8926-ec62517369e5") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:41.233313 master-0 kubenswrapper[27819]: I0319 09:33:41.233202 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:41.507095 master-0 kubenswrapper[27819]: I0319 09:33:41.506964 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:33:41.509438 master-0 kubenswrapper[27819]: I0319 09:33:41.509396 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-57475586f6-pnw8k" Mar 19 09:33:41.638201 master-0 kubenswrapper[27819]: I0319 09:33:41.638144 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"2571226d8523742aea33f013aaa3ffd52ce20043efb4ab6a5ca865d4ff5abc21"} Mar 19 09:33:41.638201 master-0 kubenswrapper[27819]: I0319 09:33:41.638190 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"2af6f18a2cb1822b819673b0062c62c465c3b0562739bf04100286003253f8d5"} Mar 19 09:33:41.638201 master-0 kubenswrapper[27819]: I0319 09:33:41.638203 27819 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"8fc4f0094d10a2c4140d742606773b2781c30ff335145e587e1bcdb639a3e23b"} Mar 19 09:33:41.638448 master-0 kubenswrapper[27819]: I0319 09:33:41.638293 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:41.723558 master-0 kubenswrapper[27819]: I0319 09:33:41.723477 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:41.724418 master-0 kubenswrapper[27819]: I0319 09:33:41.724362 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-7v7bv" Mar 19 09:33:41.746053 master-0 kubenswrapper[27819]: I0319 09:33:41.745995 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:41.886022 master-0 kubenswrapper[27819]: I0319 09:33:41.885959 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:33:41.887119 master-0 kubenswrapper[27819]: I0319 09:33:41.886708 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-lql9l" Mar 19 09:33:41.890380 master-0 kubenswrapper[27819]: I0319 09:33:41.890316 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=6.890301484 podStartE2EDuration="6.890301484s" podCreationTimestamp="2026-03-19 09:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:33:41.885730117 +0000 UTC m=+6.807307809" watchObservedRunningTime="2026-03-19 09:33:41.890301484 +0000 UTC m=+6.811879176" Mar 19 09:33:42.113288 master-0 kubenswrapper[27819]: I0319 09:33:42.113227 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:42.113597 master-0 kubenswrapper[27819]: E0319 09:33:42.113438 27819 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:42.113597 master-0 kubenswrapper[27819]: E0319 09:33:42.113485 27819 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:42.113597 master-0 kubenswrapper[27819]: E0319 09:33:42.113595 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access podName:98826625-8de0-4bf7-8926-ec62517369e5 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:44.113567247 +0000 UTC m=+9.035144969 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access") pod "installer-4-master-0" (UID: "98826625-8de0-4bf7-8926-ec62517369e5") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:42.147454 master-0 kubenswrapper[27819]: I0319 09:33:42.147300 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.14728611 podStartE2EDuration="7.14728611s" podCreationTimestamp="2026-03-19 09:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:33:42.09124878 +0000 UTC m=+7.012826472" watchObservedRunningTime="2026-03-19 09:33:42.14728611 +0000 UTC m=+7.068863802" Mar 19 09:33:42.642685 master-0 kubenswrapper[27819]: I0319 09:33:42.642608 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:42.643377 master-0 kubenswrapper[27819]: I0319 09:33:42.642620 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:42.953509 master-0 kubenswrapper[27819]: I0319 09:33:42.953376 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:33:42.957239 master-0 kubenswrapper[27819]: I0319 09:33:42.957193 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-fngzd" Mar 19 09:33:43.048747 master-0 kubenswrapper[27819]: I0319 09:33:43.048680 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=3.048663063 podStartE2EDuration="3.048663063s" podCreationTimestamp="2026-03-19 09:33:40 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:33:43.048493258 +0000 UTC m=+7.970070950" watchObservedRunningTime="2026-03-19 09:33:43.048663063 +0000 UTC m=+7.970240755" Mar 19 09:33:43.169063 master-0 kubenswrapper[27819]: I0319 09:33:43.168923 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.168903347 podStartE2EDuration="3.168903347s" podCreationTimestamp="2026-03-19 09:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:33:43.167953131 +0000 UTC m=+8.089530823" watchObservedRunningTime="2026-03-19 09:33:43.168903347 +0000 UTC m=+8.090481039" Mar 19 09:33:43.232758 master-0 kubenswrapper[27819]: I0319 09:33:43.232626 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:43.550747 master-0 kubenswrapper[27819]: I0319 09:33:43.550576 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:43.577064 master-0 kubenswrapper[27819]: I0319 09:33:43.577004 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:43.581262 master-0 kubenswrapper[27819]: I0319 09:33:43.581216 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:33:43.647853 master-0 kubenswrapper[27819]: I0319 09:33:43.647808 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:43.647853 master-0 kubenswrapper[27819]: I0319 09:33:43.647840 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 
09:33:43.776967 master-0 kubenswrapper[27819]: I0319 09:33:43.776904 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:33:43.780865 master-0 kubenswrapper[27819]: I0319 09:33:43.780824 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-fts6w" Mar 19 09:33:43.813458 master-0 kubenswrapper[27819]: I0319 09:33:43.813223 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:43.878416 master-0 kubenswrapper[27819]: I0319 09:33:43.878359 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:43.960814 master-0 kubenswrapper[27819]: I0319 09:33:43.960754 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:33:43.974243 master-0 kubenswrapper[27819]: I0319 09:33:43.974205 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-tlmxr" Mar 19 09:33:44.038172 master-0 kubenswrapper[27819]: I0319 09:33:44.038113 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:44.142143 master-0 kubenswrapper[27819]: I0319 09:33:44.142083 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:44.142875 master-0 kubenswrapper[27819]: E0319 09:33:44.142834 
27819 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:44.142875 master-0 kubenswrapper[27819]: E0319 09:33:44.142873 27819 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:44.142991 master-0 kubenswrapper[27819]: E0319 09:33:44.142923 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access podName:98826625-8de0-4bf7-8926-ec62517369e5 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:48.14290554 +0000 UTC m=+13.064483232 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access") pod "installer-4-master-0" (UID: "98826625-8de0-4bf7-8926-ec62517369e5") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:44.516629 master-0 kubenswrapper[27819]: I0319 09:33:44.516504 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 19 09:33:44.529434 master-0 kubenswrapper[27819]: I0319 09:33:44.529384 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 19 09:33:44.806574 master-0 kubenswrapper[27819]: I0319 09:33:44.806398 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:45.482513 master-0 kubenswrapper[27819]: I0319 09:33:45.482455 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:33:45.483482 master-0 kubenswrapper[27819]: I0319 09:33:45.483438 27819 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-stct6" Mar 19 09:33:45.669405 master-0 kubenswrapper[27819]: I0319 09:33:45.669353 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:46.539188 master-0 kubenswrapper[27819]: I0319 09:33:46.539130 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:46.539908 master-0 kubenswrapper[27819]: I0319 09:33:46.539244 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:46.541564 master-0 kubenswrapper[27819]: I0319 09:33:46.541497 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-k99cg" Mar 19 09:33:46.752393 master-0 kubenswrapper[27819]: I0319 09:33:46.752349 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:46.756641 master-0 kubenswrapper[27819]: I0319 09:33:46.756620 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-775788bf78-tgdnw" Mar 19 09:33:46.972076 master-0 kubenswrapper[27819]: I0319 09:33:46.972013 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:47.039129 master-0 kubenswrapper[27819]: I0319 09:33:47.039081 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 09:33:47.043448 master-0 kubenswrapper[27819]: I0319 09:33:47.043401 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-52j2b" Mar 19 
09:33:47.373203 master-0 kubenswrapper[27819]: I0319 09:33:47.373151 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:47.374131 master-0 kubenswrapper[27819]: I0319 09:33:47.374073 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:47.374131 master-0 kubenswrapper[27819]: I0319 09:33:47.374115 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:47.385560 master-0 kubenswrapper[27819]: I0319 09:33:47.384336 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:47.394568 master-0 kubenswrapper[27819]: I0319 09:33:47.391687 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:33:47.424338 master-0 kubenswrapper[27819]: I0319 09:33:47.424270 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:33:47.670218 master-0 kubenswrapper[27819]: I0319 09:33:47.670119 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:33:48.198525 master-0 kubenswrapper[27819]: I0319 09:33:48.198464 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:48.198750 master-0 kubenswrapper[27819]: E0319 09:33:48.198691 27819 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:48.198750 master-0 kubenswrapper[27819]: 
E0319 09:33:48.198714 27819 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:48.198815 master-0 kubenswrapper[27819]: E0319 09:33:48.198774 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access podName:98826625-8de0-4bf7-8926-ec62517369e5 nodeName:}" failed. No retries permitted until 2026-03-19 09:33:56.198755217 +0000 UTC m=+21.120332909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access") pod "installer-4-master-0" (UID: "98826625-8de0-4bf7-8926-ec62517369e5") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:48.238315 master-0 kubenswrapper[27819]: I0319 09:33:48.238266 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:48.245761 master-0 kubenswrapper[27819]: I0319 09:33:48.245722 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6f6b54748-s5cpx" Mar 19 09:33:48.337726 master-0 kubenswrapper[27819]: I0319 09:33:48.337653 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:48.343391 master-0 kubenswrapper[27819]: I0319 09:33:48.343347 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:33:49.040313 master-0 kubenswrapper[27819]: I0319 09:33:49.040266 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:49.041514 master-0 
kubenswrapper[27819]: I0319 09:33:49.041498 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-rgzxb" Mar 19 09:33:49.414359 master-0 kubenswrapper[27819]: I0319 09:33:49.414289 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:33:49.459987 master-0 kubenswrapper[27819]: I0319 09:33:49.459931 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:33:49.560449 master-0 kubenswrapper[27819]: I0319 09:33:49.560393 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:49.920510 master-0 kubenswrapper[27819]: I0319 09:33:49.920433 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:33:49.958340 master-0 kubenswrapper[27819]: I0319 09:33:49.958292 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l26xf" Mar 19 09:33:50.975555 master-0 kubenswrapper[27819]: I0319 09:33:50.975504 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:51.026557 master-0 kubenswrapper[27819]: I0319 09:33:51.026494 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7cczg" Mar 19 09:33:51.270636 master-0 kubenswrapper[27819]: I0319 09:33:51.270464 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:51.308795 master-0 kubenswrapper[27819]: I0319 09:33:51.308751 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-887wl" Mar 19 09:33:52.652108 master-0 kubenswrapper[27819]: I0319 09:33:52.652054 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:33:54.075285 master-0 kubenswrapper[27819]: I0319 09:33:54.075226 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:54.137230 master-0 kubenswrapper[27819]: I0319 09:33:54.137183 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brpbp" Mar 19 09:33:55.674620 master-0 kubenswrapper[27819]: I0319 09:33:55.674533 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:33:56.209184 master-0 kubenswrapper[27819]: I0319 09:33:56.209130 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:33:56.209399 master-0 kubenswrapper[27819]: E0319 09:33:56.209301 27819 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:56.209399 master-0 kubenswrapper[27819]: E0319 09:33:56.209322 27819 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:56.209399 master-0 kubenswrapper[27819]: E0319 09:33:56.209371 27819 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access podName:98826625-8de0-4bf7-8926-ec62517369e5 nodeName:}" failed. No retries permitted until 2026-03-19 09:34:12.20935559 +0000 UTC m=+37.130933282 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access") pod "installer-4-master-0" (UID: "98826625-8de0-4bf7-8926-ec62517369e5") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:33:57.811657 master-0 kubenswrapper[27819]: I0319 09:33:57.811592 27819 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:33:57.812302 master-0 kubenswrapper[27819]: I0319 09:33:57.811837 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" containerID="cri-o://119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578" gracePeriod=5 Mar 19 09:34:02.651203 master-0 kubenswrapper[27819]: I0319 09:34:02.651114 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:34:02.651893 master-0 kubenswrapper[27819]: I0319 09:34:02.651317 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:34:02.675875 master-0 kubenswrapper[27819]: I0319 09:34:02.675738 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zmrpw" Mar 19 09:34:03.398613 master-0 kubenswrapper[27819]: I0319 09:34:03.398567 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 19 
09:34:03.398837 master-0 kubenswrapper[27819]: I0319 09:34:03.398631 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:34:03.506927 master-0 kubenswrapper[27819]: I0319 09:34:03.506875 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 09:34:03.506927 master-0 kubenswrapper[27819]: I0319 09:34:03.506916 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 09:34:03.507251 master-0 kubenswrapper[27819]: I0319 09:34:03.506969 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 09:34:03.507251 master-0 kubenswrapper[27819]: I0319 09:34:03.507042 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock" (OuterVolumeSpecName: "var-lock") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:34:03.507251 master-0 kubenswrapper[27819]: I0319 09:34:03.507042 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log" (OuterVolumeSpecName: "var-log") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:34:03.507251 master-0 kubenswrapper[27819]: I0319 09:34:03.507212 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 09:34:03.507251 master-0 kubenswrapper[27819]: I0319 09:34:03.507253 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 19 09:34:03.507574 master-0 kubenswrapper[27819]: I0319 09:34:03.507335 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:34:03.507574 master-0 kubenswrapper[27819]: I0319 09:34:03.507398 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests" (OuterVolumeSpecName: "manifests") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:34:03.507744 master-0 kubenswrapper[27819]: I0319 09:34:03.507649 27819 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:34:03.507744 master-0 kubenswrapper[27819]: I0319 09:34:03.507664 27819 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") on node \"master-0\" DevicePath \"\"" Mar 19 09:34:03.507744 master-0 kubenswrapper[27819]: I0319 09:34:03.507675 27819 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") on node \"master-0\" DevicePath \"\"" Mar 19 09:34:03.507744 master-0 kubenswrapper[27819]: I0319 09:34:03.507685 27819 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:34:03.512622 master-0 kubenswrapper[27819]: I0319 09:34:03.512559 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:34:03.609429 master-0 kubenswrapper[27819]: I0319 09:34:03.609354 27819 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:34:03.781417 master-0 kubenswrapper[27819]: I0319 09:34:03.781362 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 19 09:34:03.781976 master-0 kubenswrapper[27819]: I0319 09:34:03.781414 27819 generic.go:334] "Generic (PLEG): container finished" podID="16fb4ea7f83036d9c6adf3454fc7e9db" containerID="119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578" exitCode=137 Mar 19 09:34:03.781976 master-0 kubenswrapper[27819]: I0319 09:34:03.781480 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:34:03.781976 master-0 kubenswrapper[27819]: I0319 09:34:03.781515 27819 scope.go:117] "RemoveContainer" containerID="119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578" Mar 19 09:34:03.802757 master-0 kubenswrapper[27819]: I0319 09:34:03.802711 27819 scope.go:117] "RemoveContainer" containerID="119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578" Mar 19 09:34:03.803317 master-0 kubenswrapper[27819]: E0319 09:34:03.803251 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578\": container with ID starting with 119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578 not found: ID does not exist" containerID="119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578" Mar 19 09:34:03.803387 master-0 kubenswrapper[27819]: I0319 09:34:03.803322 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578"} err="failed to get container status \"119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578\": rpc error: code = NotFound desc = could not find container \"119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578\": container with ID starting with 119cff75d901b1c0d2049e4c4e425c4207d975e74defdc818e4b452533d16578 not found: ID does not exist" Mar 19 09:34:03.821955 master-0 kubenswrapper[27819]: I0319 09:34:03.821857 27819 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c0905219-67dd-49eb-b476-0798172dbfb7" Mar 19 09:34:05.299711 master-0 kubenswrapper[27819]: I0319 09:34:05.299652 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="16fb4ea7f83036d9c6adf3454fc7e9db" path="/var/lib/kubelet/pods/16fb4ea7f83036d9c6adf3454fc7e9db/volumes" Mar 19 09:34:05.300271 master-0 kubenswrapper[27819]: I0319 09:34:05.299965 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 19 09:34:05.321561 master-0 kubenswrapper[27819]: I0319 09:34:05.318197 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:34:05.321561 master-0 kubenswrapper[27819]: I0319 09:34:05.318261 27819 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c0905219-67dd-49eb-b476-0798172dbfb7" Mar 19 09:34:05.321561 master-0 kubenswrapper[27819]: I0319 09:34:05.321224 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:34:05.321561 master-0 kubenswrapper[27819]: I0319 09:34:05.321263 27819 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="c0905219-67dd-49eb-b476-0798172dbfb7" Mar 19 09:34:08.348802 master-0 kubenswrapper[27819]: I0319 09:34:08.348737 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:34:08.354208 master-0 kubenswrapper[27819]: I0319 09:34:08.354129 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:34:12.223502 master-0 kubenswrapper[27819]: I0319 09:34:12.223376 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" 
(UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:34:12.224733 master-0 kubenswrapper[27819]: E0319 09:34:12.223618 27819 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:34:12.224733 master-0 kubenswrapper[27819]: E0319 09:34:12.223640 27819 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:34:12.224733 master-0 kubenswrapper[27819]: E0319 09:34:12.223685 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access podName:98826625-8de0-4bf7-8926-ec62517369e5 nodeName:}" failed. No retries permitted until 2026-03-19 09:34:44.223669603 +0000 UTC m=+69.145247295 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access") pod "installer-4-master-0" (UID: "98826625-8de0-4bf7-8926-ec62517369e5") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 19 09:34:29.144731 master-0 kubenswrapper[27819]: I0319 09:34:29.144669 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.144900 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.144911 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.144938 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.144944 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.144957 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.144963 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.144971 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:34:29.146059 master-0 
kubenswrapper[27819]: I0319 09:34:29.144979 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="kube-controller-manager" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.144993 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145001 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145020 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145030 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145047 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145055 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145067 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145073 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145091 27819 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145099 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145112 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145120 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145135 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145143 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145152 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145159 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145169 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec98e408-a574-40eb-b84d-111edbaab81a" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145175 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec98e408-a574-40eb-b84d-111edbaab81a" 
containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145189 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145195 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145210 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9969717-8350-416e-8711-877cdf557d81" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145215 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9969717-8350-416e-8711-877cdf557d81" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145268 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145275 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145289 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145295 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145303 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145309 27819 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145317 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5780efa-c56a-4953-807f-6a51efc91b09" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145324 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5780efa-c56a-4953-807f-6a51efc91b09" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145331 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145337 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145347 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ab0802-da8a-475c-a707-09f7838f580b" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145352 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ab0802-da8a-475c-a707-09f7838f580b" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: E0319 09:34:29.145365 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98826625-8de0-4bf7-8926-ec62517369e5" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145371 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="98826625-8de0-4bf7-8926-ec62517369e5" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145467 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" 
containerName="kube-controller-manager" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145495 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145511 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="47ea8c5d-8db7-44a5-bdfd-e12d3ac1d26c" containerName="assisted-installer-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145528 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="67658b93f6f5927402b87ec35623e46e" containerName="cluster-policy-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145557 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9969717-8350-416e-8711-877cdf557d81" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145566 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145579 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ab0802-da8a-475c-a707-09f7838f580b" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145595 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145609 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145617 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" 
Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145632 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5780efa-c56a-4953-807f-6a51efc91b09" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145643 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ce846a-f7ca-4f96-9bb4-509d084dcec1" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145655 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145667 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ba861f-a073-4d60-9136-041c2e98dd0f" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145674 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="98826625-8de0-4bf7-8926-ec62517369e5" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145684 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145695 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20d34ff-5b2a-4142-802f-57a7a38c5a12" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145706 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04277ae-5881-4ce1-9157-d58f93a5f116" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145718 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec98e408-a574-40eb-b84d-111edbaab81a" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145724 27819 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fe4e3a0b-973b-4534-b91c-1e870e4e5c32" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145736 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="925be58b-a4e2-448b-afb4-4b4d689ae64c" containerName="installer" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.145748 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" Mar 19 09:34:29.146059 master-0 kubenswrapper[27819]: I0319 09:34:29.146095 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.151779 master-0 kubenswrapper[27819]: I0319 09:34:29.149892 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-j48rl" Mar 19 09:34:29.153394 master-0 kubenswrapper[27819]: I0319 09:34:29.153335 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 19 09:34:29.167169 master-0 kubenswrapper[27819]: I0319 09:34:29.167081 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 09:34:29.252045 master-0 kubenswrapper[27819]: I0319 09:34:29.251966 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.252301 master-0 kubenswrapper[27819]: I0319 09:34:29.252063 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/af509837-5ce1-4863-a9c3-5d9a79828994-kube-api-access\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.252301 master-0 kubenswrapper[27819]: I0319 09:34:29.252110 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-var-lock\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.353055 master-0 kubenswrapper[27819]: I0319 09:34:29.353004 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.353297 master-0 kubenswrapper[27819]: I0319 09:34:29.353096 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af509837-5ce1-4863-a9c3-5d9a79828994-kube-api-access\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.353297 master-0 kubenswrapper[27819]: I0319 09:34:29.353128 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.353297 master-0 kubenswrapper[27819]: I0319 09:34:29.353154 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-var-lock\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.353297 master-0 kubenswrapper[27819]: I0319 09:34:29.353238 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-var-lock\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.365501 master-0 kubenswrapper[27819]: I0319 09:34:29.365452 27819 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 19 09:34:29.367967 master-0 kubenswrapper[27819]: I0319 09:34:29.367928 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af509837-5ce1-4863-a9c3-5d9a79828994-kube-api-access\") pod \"installer-5-master-0\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.471074 master-0 kubenswrapper[27819]: I0319 09:34:29.470924 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:34:29.867878 master-0 kubenswrapper[27819]: I0319 09:34:29.867801 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 09:34:29.875843 master-0 kubenswrapper[27819]: W0319 09:34:29.875772 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaf509837_5ce1_4863_a9c3_5d9a79828994.slice/crio-84c902f78de721abbe35454b7e7419cc3fd8d5af955b74df421b024a3fa00043 WatchSource:0}: Error finding container 84c902f78de721abbe35454b7e7419cc3fd8d5af955b74df421b024a3fa00043: Status 404 returned error can't find the container with id 84c902f78de721abbe35454b7e7419cc3fd8d5af955b74df421b024a3fa00043 Mar 19 09:34:29.957891 master-0 kubenswrapper[27819]: I0319 09:34:29.957836 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"af509837-5ce1-4863-a9c3-5d9a79828994","Type":"ContainerStarted","Data":"84c902f78de721abbe35454b7e7419cc3fd8d5af955b74df421b024a3fa00043"} Mar 19 09:34:30.973530 master-0 kubenswrapper[27819]: I0319 09:34:30.973477 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"af509837-5ce1-4863-a9c3-5d9a79828994","Type":"ContainerStarted","Data":"0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8"} Mar 19 09:34:30.992903 master-0 kubenswrapper[27819]: I0319 09:34:30.992809 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.992792208 podStartE2EDuration="1.992792208s" podCreationTimestamp="2026-03-19 09:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:34:30.989443345 +0000 UTC m=+55.911021067" watchObservedRunningTime="2026-03-19 
09:34:30.992792208 +0000 UTC m=+55.914369910" Mar 19 09:34:33.585009 master-0 kubenswrapper[27819]: I0319 09:34:33.584942 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 19 09:34:35.259256 master-0 kubenswrapper[27819]: I0319 09:34:35.259221 27819 scope.go:117] "RemoveContainer" containerID="e6c6a6b2ffdb2a6ceaac069cb1bbfd1fd6ab268976108284249a62d330f8ad4e" Mar 19 09:34:35.281403 master-0 kubenswrapper[27819]: I0319 09:34:35.281363 27819 scope.go:117] "RemoveContainer" containerID="4ff4b935126cc5d750c1d850d7bd8bc2f70fd6fa92c703e7c39a069db8572af3" Mar 19 09:34:35.298991 master-0 kubenswrapper[27819]: I0319 09:34:35.298960 27819 scope.go:117] "RemoveContainer" containerID="33fbab3dae4d95c59279d28953be3dee55bacb9a970231a9a8855ae0fd8f5ddd" Mar 19 09:34:35.323442 master-0 kubenswrapper[27819]: I0319 09:34:35.323407 27819 scope.go:117] "RemoveContainer" containerID="00add47a2cdec59c3ac383946429a4dc013519a6933bbb0d7ebdd58eb0eb7186" Mar 19 09:34:35.342102 master-0 kubenswrapper[27819]: I0319 09:34:35.342067 27819 scope.go:117] "RemoveContainer" containerID="95ac7f362ef5d31be76e509ce342250794db8fc83ad49a811e1f5659d7238a79" Mar 19 09:34:35.358016 master-0 kubenswrapper[27819]: I0319 09:34:35.357978 27819 scope.go:117] "RemoveContainer" containerID="48d42851ba5e1a1222e1f2eb24f68210235c910ac77423fe9def29b71929e2f4" Mar 19 09:34:35.371923 master-0 kubenswrapper[27819]: I0319 09:34:35.371875 27819 scope.go:117] "RemoveContainer" containerID="0ea74be9ce6a8db82cc76cb8b1abbace62eee2a97494f9a8b0c0af4311285f49" Mar 19 09:34:41.942220 master-0 kubenswrapper[27819]: I0319 09:34:41.942127 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 09:34:41.943155 master-0 kubenswrapper[27819]: I0319 09:34:41.942360 27819 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/installer-5-master-0" podUID="af509837-5ce1-4863-a9c3-5d9a79828994" containerName="installer" containerID="cri-o://0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8" gracePeriod=30 Mar 19 09:34:44.257011 master-0 kubenswrapper[27819]: I0319 09:34:44.256933 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:34:44.260568 master-0 kubenswrapper[27819]: I0319 09:34:44.260498 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 19 09:34:44.357569 master-0 kubenswrapper[27819]: I0319 09:34:44.357512 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") pod \"98826625-8de0-4bf7-8926-ec62517369e5\" (UID: \"98826625-8de0-4bf7-8926-ec62517369e5\") " Mar 19 09:34:44.361039 master-0 kubenswrapper[27819]: I0319 09:34:44.360982 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98826625-8de0-4bf7-8926-ec62517369e5" (UID: "98826625-8de0-4bf7-8926-ec62517369e5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:34:44.459131 master-0 kubenswrapper[27819]: I0319 09:34:44.459079 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98826625-8de0-4bf7-8926-ec62517369e5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:34:45.150377 master-0 kubenswrapper[27819]: I0319 09:34:45.150297 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 19 09:34:45.151776 master-0 kubenswrapper[27819]: I0319 09:34:45.151738 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.161488 master-0 kubenswrapper[27819]: I0319 09:34:45.161426 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 19 09:34:45.167467 master-0 kubenswrapper[27819]: I0319 09:34:45.167412 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-var-lock\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.167800 master-0 kubenswrapper[27819]: I0319 09:34:45.167760 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.167925 master-0 kubenswrapper[27819]: I0319 09:34:45.167887 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/72f73c81-e455-430a-9cb7-c11a61d977ad-kube-api-access\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.268812 master-0 kubenswrapper[27819]: I0319 09:34:45.268763 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.269377 master-0 kubenswrapper[27819]: I0319 09:34:45.268932 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.269377 master-0 kubenswrapper[27819]: I0319 09:34:45.269005 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72f73c81-e455-430a-9cb7-c11a61d977ad-kube-api-access\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.269377 master-0 kubenswrapper[27819]: I0319 09:34:45.269191 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-var-lock\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.269377 master-0 kubenswrapper[27819]: I0319 09:34:45.269254 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-var-lock\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.292851 master-0 kubenswrapper[27819]: I0319 09:34:45.292774 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72f73c81-e455-430a-9cb7-c11a61d977ad-kube-api-access\") pod \"installer-6-master-0\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.478092 master-0 kubenswrapper[27819]: I0319 09:34:45.477970 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:34:45.931034 master-0 kubenswrapper[27819]: I0319 09:34:45.930989 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 19 09:34:45.936156 master-0 kubenswrapper[27819]: W0319 09:34:45.936121 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod72f73c81_e455_430a_9cb7_c11a61d977ad.slice/crio-bedd82ea706418e76e83dd689f44782a3eae5aa2b6d319fa4b49e11d41477dcd WatchSource:0}: Error finding container bedd82ea706418e76e83dd689f44782a3eae5aa2b6d319fa4b49e11d41477dcd: Status 404 returned error can't find the container with id bedd82ea706418e76e83dd689f44782a3eae5aa2b6d319fa4b49e11d41477dcd Mar 19 09:34:46.089114 master-0 kubenswrapper[27819]: I0319 09:34:46.089063 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"72f73c81-e455-430a-9cb7-c11a61d977ad","Type":"ContainerStarted","Data":"bedd82ea706418e76e83dd689f44782a3eae5aa2b6d319fa4b49e11d41477dcd"} Mar 19 09:34:47.097640 master-0 kubenswrapper[27819]: I0319 09:34:47.097581 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"72f73c81-e455-430a-9cb7-c11a61d977ad","Type":"ContainerStarted","Data":"0edc4db64b2ea3716c2f9d86207bbcb556500e233e61f298fea9752a9b6a5518"} Mar 19 09:34:47.113866 master-0 kubenswrapper[27819]: I0319 09:34:47.113763 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=2.113734976 podStartE2EDuration="2.113734976s" podCreationTimestamp="2026-03-19 09:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:34:47.112620996 +0000 UTC m=+72.034198688" watchObservedRunningTime="2026-03-19 09:34:47.113734976 +0000 UTC m=+72.035312688" Mar 19 09:35:01.157103 master-0 kubenswrapper[27819]: I0319 09:35:01.157063 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_af509837-5ce1-4863-a9c3-5d9a79828994/installer/0.log" Mar 19 09:35:01.157643 master-0 kubenswrapper[27819]: I0319 09:35:01.157131 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:35:01.203516 master-0 kubenswrapper[27819]: I0319 09:35:01.202937 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_af509837-5ce1-4863-a9c3-5d9a79828994/installer/0.log" Mar 19 09:35:01.203516 master-0 kubenswrapper[27819]: I0319 09:35:01.202978 27819 generic.go:334] "Generic (PLEG): container finished" podID="af509837-5ce1-4863-a9c3-5d9a79828994" containerID="0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8" exitCode=1 Mar 19 09:35:01.203516 master-0 kubenswrapper[27819]: I0319 09:35:01.203005 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"af509837-5ce1-4863-a9c3-5d9a79828994","Type":"ContainerDied","Data":"0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8"} Mar 19 09:35:01.203516 master-0 kubenswrapper[27819]: I0319 09:35:01.203028 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"af509837-5ce1-4863-a9c3-5d9a79828994","Type":"ContainerDied","Data":"84c902f78de721abbe35454b7e7419cc3fd8d5af955b74df421b024a3fa00043"} Mar 19 09:35:01.203516 master-0 kubenswrapper[27819]: I0319 09:35:01.203044 27819 scope.go:117] "RemoveContainer" containerID="0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8" Mar 19 09:35:01.203516 master-0 kubenswrapper[27819]: I0319 09:35:01.203117 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 19 09:35:01.227281 master-0 kubenswrapper[27819]: I0319 09:35:01.227246 27819 scope.go:117] "RemoveContainer" containerID="0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8" Mar 19 09:35:01.227849 master-0 kubenswrapper[27819]: E0319 09:35:01.227782 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8\": container with ID starting with 0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8 not found: ID does not exist" containerID="0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8" Mar 19 09:35:01.227849 master-0 kubenswrapper[27819]: I0319 09:35:01.227823 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8"} err="failed to get container status \"0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8\": rpc error: code = NotFound desc = could not find container \"0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8\": container with ID starting with 0e5b589bbb3e784b76f40588a356c896b0fd3da42edba0f1c467b2e2548ac0c8 not found: ID does not exist" Mar 19 09:35:01.274537 master-0 kubenswrapper[27819]: I0319 09:35:01.274504 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-kubelet-dir\") pod \"af509837-5ce1-4863-a9c3-5d9a79828994\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " Mar 19 09:35:01.274846 master-0 kubenswrapper[27819]: I0319 09:35:01.274815 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af509837-5ce1-4863-a9c3-5d9a79828994-kube-api-access\") pod 
\"af509837-5ce1-4863-a9c3-5d9a79828994\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " Mar 19 09:35:01.274948 master-0 kubenswrapper[27819]: I0319 09:35:01.274937 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-var-lock\") pod \"af509837-5ce1-4863-a9c3-5d9a79828994\" (UID: \"af509837-5ce1-4863-a9c3-5d9a79828994\") " Mar 19 09:35:01.275367 master-0 kubenswrapper[27819]: I0319 09:35:01.275351 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-var-lock" (OuterVolumeSpecName: "var-lock") pod "af509837-5ce1-4863-a9c3-5d9a79828994" (UID: "af509837-5ce1-4863-a9c3-5d9a79828994"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:35:01.275467 master-0 kubenswrapper[27819]: I0319 09:35:01.275456 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "af509837-5ce1-4863-a9c3-5d9a79828994" (UID: "af509837-5ce1-4863-a9c3-5d9a79828994"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:35:01.278706 master-0 kubenswrapper[27819]: I0319 09:35:01.278673 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af509837-5ce1-4863-a9c3-5d9a79828994-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "af509837-5ce1-4863-a9c3-5d9a79828994" (UID: "af509837-5ce1-4863-a9c3-5d9a79828994"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:35:01.378362 master-0 kubenswrapper[27819]: I0319 09:35:01.377311 27819 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:01.378362 master-0 kubenswrapper[27819]: I0319 09:35:01.377357 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af509837-5ce1-4863-a9c3-5d9a79828994-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:01.378362 master-0 kubenswrapper[27819]: I0319 09:35:01.377368 27819 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/af509837-5ce1-4863-a9c3-5d9a79828994-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:01.528822 master-0 kubenswrapper[27819]: I0319 09:35:01.528751 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 09:35:01.538313 master-0 kubenswrapper[27819]: I0319 09:35:01.538255 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 19 09:35:03.293707 master-0 kubenswrapper[27819]: I0319 09:35:03.293636 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af509837-5ce1-4863-a9c3-5d9a79828994" path="/var/lib/kubelet/pods/af509837-5ce1-4863-a9c3-5d9a79828994/volumes" Mar 19 09:35:44.051055 master-0 kubenswrapper[27819]: I0319 09:35:44.050807 27819 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:35:44.052111 master-0 kubenswrapper[27819]: E0319 09:35:44.051287 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af509837-5ce1-4863-a9c3-5d9a79828994" containerName="installer" Mar 19 09:35:44.052111 master-0 
kubenswrapper[27819]: I0319 09:35:44.051299 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="af509837-5ce1-4863-a9c3-5d9a79828994" containerName="installer" Mar 19 09:35:44.052111 master-0 kubenswrapper[27819]: I0319 09:35:44.051430 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="af509837-5ce1-4863-a9c3-5d9a79828994" containerName="installer" Mar 19 09:35:44.052111 master-0 kubenswrapper[27819]: I0319 09:35:44.051773 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.053357 master-0 kubenswrapper[27819]: I0319 09:35:44.053306 27819 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 09:35:44.053876 master-0 kubenswrapper[27819]: I0319 09:35:44.053794 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" containerID="cri-o://250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573" gracePeriod=15 Mar 19 09:35:44.053997 master-0 kubenswrapper[27819]: I0319 09:35:44.053876 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d" gracePeriod=15 Mar 19 09:35:44.053997 master-0 kubenswrapper[27819]: I0319 09:35:44.053960 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5" gracePeriod=15 Mar 19 09:35:44.054130 
master-0 kubenswrapper[27819]: I0319 09:35:44.053993 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45" gracePeriod=15 Mar 19 09:35:44.054130 master-0 kubenswrapper[27819]: I0319 09:35:44.054056 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1" gracePeriod=15 Mar 19 09:35:44.055661 master-0 kubenswrapper[27819]: I0319 09:35:44.055323 27819 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 09:35:44.055661 master-0 kubenswrapper[27819]: E0319 09:35:44.055645 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup" Mar 19 09:35:44.055661 master-0 kubenswrapper[27819]: I0319 09:35:44.055659 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: E0319 09:35:44.055673 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: I0319 09:35:44.055682 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: E0319 09:35:44.055701 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" 
containerName="kube-apiserver-cert-syncer" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: I0319 09:35:44.055729 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: E0319 09:35:44.055741 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: I0319 09:35:44.055747 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: E0319 09:35:44.055759 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: I0319 09:35:44.055764 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: E0319 09:35:44.055775 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: I0319 09:35:44.055781 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: E0319 09:35:44.055809 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 09:35:44.055834 master-0 kubenswrapper[27819]: I0319 09:35:44.055816 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" 
containerName="kube-apiserver-cert-regeneration-controller" Mar 19 09:35:44.056451 master-0 kubenswrapper[27819]: I0319 09:35:44.056012 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" Mar 19 09:35:44.056451 master-0 kubenswrapper[27819]: I0319 09:35:44.056067 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" Mar 19 09:35:44.056451 master-0 kubenswrapper[27819]: I0319 09:35:44.056081 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" Mar 19 09:35:44.056451 master-0 kubenswrapper[27819]: I0319 09:35:44.056096 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" Mar 19 09:35:44.056451 master-0 kubenswrapper[27819]: I0319 09:35:44.056104 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" Mar 19 09:35:44.056451 master-0 kubenswrapper[27819]: I0319 09:35:44.056112 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" Mar 19 09:35:44.107095 master-0 kubenswrapper[27819]: I0319 09:35:44.107039 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.163862 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.163918 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.163947 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.163973 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.163996 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.164036 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.164089 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.164740 master-0 kubenswrapper[27819]: I0319 09:35:44.164122 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.265906 master-0 kubenswrapper[27819]: I0319 09:35:44.265837 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.266102 master-0 kubenswrapper[27819]: I0319 09:35:44.265950 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.266102 master-0 kubenswrapper[27819]: I0319 09:35:44.265983 27819 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266102 master-0 kubenswrapper[27819]: I0319 09:35:44.266034 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266102 master-0 kubenswrapper[27819]: I0319 09:35:44.266057 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266102 master-0 kubenswrapper[27819]: I0319 09:35:44.266094 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266252 master-0 kubenswrapper[27819]: I0319 09:35:44.266115 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.266252 master-0 kubenswrapper[27819]: I0319 
09:35:44.266138 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.266252 master-0 kubenswrapper[27819]: I0319 09:35:44.266202 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.266252 master-0 kubenswrapper[27819]: I0319 09:35:44.266214 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266368 master-0 kubenswrapper[27819]: I0319 09:35:44.266275 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266368 master-0 kubenswrapper[27819]: I0319 09:35:44.266300 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:44.266368 master-0 kubenswrapper[27819]: I0319 
09:35:44.266298 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266368 master-0 kubenswrapper[27819]: I0319 09:35:44.266359 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266499 master-0 kubenswrapper[27819]: I0319 09:35:44.266391 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.266499 master-0 kubenswrapper[27819]: I0319 09:35:44.266466 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.405658 master-0 kubenswrapper[27819]: I0319 09:35:44.405510 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:35:44.865649 master-0 kubenswrapper[27819]: I0319 09:35:44.865592 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-check-endpoints/0.log" Mar 19 09:35:44.870763 master-0 kubenswrapper[27819]: I0319 09:35:44.870727 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 19 09:35:44.871748 master-0 kubenswrapper[27819]: I0319 09:35:44.871717 27819 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d" exitCode=0 Mar 19 09:35:44.871832 master-0 kubenswrapper[27819]: I0319 09:35:44.871819 27819 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5" exitCode=0 Mar 19 09:35:44.871903 master-0 kubenswrapper[27819]: I0319 09:35:44.871891 27819 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45" exitCode=0 Mar 19 09:35:44.871993 master-0 kubenswrapper[27819]: I0319 09:35:44.871851 27819 scope.go:117] "RemoveContainer" containerID="756374cfad040ab2f111ee5526fff718384e34314b3022f03afd3502143ed50c" Mar 19 09:35:44.872203 master-0 kubenswrapper[27819]: I0319 09:35:44.871975 27819 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1" exitCode=2 Mar 19 09:35:44.874375 master-0 kubenswrapper[27819]: I0319 09:35:44.874354 27819 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"efa84227f653a981cbac9a45ac278327aaa9aa4a65f7ec07cb25ef705470a4fa"} Mar 19 09:35:44.874469 master-0 kubenswrapper[27819]: I0319 09:35:44.874456 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"aaa77867bc4266c02b6a3523591f542670be2887d688cc4c29b38059bb099fd1"} Mar 19 09:35:45.883098 master-0 kubenswrapper[27819]: I0319 09:35:45.883071 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 19 09:35:47.972022 master-0 kubenswrapper[27819]: E0319 09:35:47.971927 27819 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:47.973409 master-0 kubenswrapper[27819]: E0319 09:35:47.973331 27819 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:47.974159 master-0 kubenswrapper[27819]: E0319 09:35:47.974103 27819 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:47.974840 master-0 kubenswrapper[27819]: E0319 09:35:47.974768 27819 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:47.975754 master-0 kubenswrapper[27819]: E0319 09:35:47.975661 27819 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:47.975754 master-0 kubenswrapper[27819]: I0319 09:35:47.975735 27819 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 19 09:35:47.976623 master-0 kubenswrapper[27819]: E0319 09:35:47.976483 27819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 19 09:35:48.178506 master-0 kubenswrapper[27819]: E0319 09:35:48.178431 27819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 19 09:35:48.580150 master-0 kubenswrapper[27819]: E0319 09:35:48.580047 27819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 19 09:35:49.118871 master-0 kubenswrapper[27819]: E0319 09:35:49.118676 27819 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189e346c8ccf13ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:7d5ce05b3d592e63f1f92202d52b9635,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:35:44.054047674 +0000 UTC m=+128.975625376,LastTimestamp:2026-03-19 09:35:44.054047674 +0000 UTC m=+128.975625376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:35:49.381983 master-0 kubenswrapper[27819]: E0319 09:35:49.381848 27819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 19 09:35:49.859911 master-0 kubenswrapper[27819]: I0319 09:35:49.859859 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 19 09:35:49.860720 master-0 kubenswrapper[27819]: I0319 09:35:49.860690 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:49.879861 master-0 kubenswrapper[27819]: I0319 09:35:49.879706 27819 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.880411 master-0 kubenswrapper[27819]: I0319 09:35:49.880371 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.884493 master-0 kubenswrapper[27819]: I0319 09:35:49.884437 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.884926 master-0 kubenswrapper[27819]: I0319 09:35:49.884883 27819 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.885399 master-0 kubenswrapper[27819]: I0319 09:35:49.885364 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.885929 master-0 kubenswrapper[27819]: I0319 09:35:49.885883 27819 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.916566 master-0 kubenswrapper[27819]: I0319 09:35:49.916479 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 19 09:35:49.917484 master-0 kubenswrapper[27819]: I0319 09:35:49.917436 27819 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573" exitCode=0 Mar 19 09:35:49.917811 master-0 kubenswrapper[27819]: I0319 09:35:49.917526 27819 scope.go:117] "RemoveContainer" containerID="3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d" Mar 19 09:35:49.917811 master-0 kubenswrapper[27819]: I0319 09:35:49.917540 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:49.919182 master-0 kubenswrapper[27819]: I0319 09:35:49.919140 27819 generic.go:334] "Generic (PLEG): container finished" podID="72f73c81-e455-430a-9cb7-c11a61d977ad" containerID="0edc4db64b2ea3716c2f9d86207bbcb556500e233e61f298fea9752a9b6a5518" exitCode=0 Mar 19 09:35:49.919224 master-0 kubenswrapper[27819]: I0319 09:35:49.919187 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"72f73c81-e455-430a-9cb7-c11a61d977ad","Type":"ContainerDied","Data":"0edc4db64b2ea3716c2f9d86207bbcb556500e233e61f298fea9752a9b6a5518"} Mar 19 09:35:49.920557 master-0 kubenswrapper[27819]: I0319 09:35:49.920515 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.921137 master-0 kubenswrapper[27819]: I0319 09:35:49.921080 27819 status_manager.go:851] "Failed to get status for pod" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.921830 master-0 kubenswrapper[27819]: I0319 09:35:49.921787 27819 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:49.931162 master-0 
kubenswrapper[27819]: I0319 09:35:49.931133 27819 scope.go:117] "RemoveContainer" containerID="3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5" Mar 19 09:35:49.940011 master-0 kubenswrapper[27819]: I0319 09:35:49.939966 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 19 09:35:49.940114 master-0 kubenswrapper[27819]: I0319 09:35:49.940084 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 19 09:35:49.940148 master-0 kubenswrapper[27819]: I0319 09:35:49.940115 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 19 09:35:49.940194 master-0 kubenswrapper[27819]: I0319 09:35:49.940101 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:35:49.940230 master-0 kubenswrapper[27819]: I0319 09:35:49.940166 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). 
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:35:49.940283 master-0 kubenswrapper[27819]: I0319 09:35:49.940166 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:35:49.940689 master-0 kubenswrapper[27819]: I0319 09:35:49.940654 27819 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:49.940731 master-0 kubenswrapper[27819]: I0319 09:35:49.940689 27819 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:49.940731 master-0 kubenswrapper[27819]: I0319 09:35:49.940710 27819 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:49.943223 master-0 kubenswrapper[27819]: I0319 09:35:49.943182 27819 scope.go:117] "RemoveContainer" containerID="32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45" Mar 19 09:35:49.953668 master-0 kubenswrapper[27819]: I0319 09:35:49.953646 27819 scope.go:117] "RemoveContainer" containerID="b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1" Mar 19 09:35:49.967447 master-0 kubenswrapper[27819]: I0319 09:35:49.967401 27819 scope.go:117] "RemoveContainer" containerID="250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573" Mar 19 09:35:49.987786 master-0 kubenswrapper[27819]: I0319 
09:35:49.987742 27819 scope.go:117] "RemoveContainer" containerID="eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244" Mar 19 09:35:50.000760 master-0 kubenswrapper[27819]: I0319 09:35:50.000728 27819 scope.go:117] "RemoveContainer" containerID="3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d" Mar 19 09:35:50.001175 master-0 kubenswrapper[27819]: E0319 09:35:50.001140 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d\": container with ID starting with 3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d not found: ID does not exist" containerID="3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d" Mar 19 09:35:50.001242 master-0 kubenswrapper[27819]: I0319 09:35:50.001175 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d"} err="failed to get container status \"3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d\": rpc error: code = NotFound desc = could not find container \"3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d\": container with ID starting with 3dbd58795e2eeedb26d71f5258e19c1d9b88182ae1d336c7445639c006853f9d not found: ID does not exist" Mar 19 09:35:50.001242 master-0 kubenswrapper[27819]: I0319 09:35:50.001204 27819 scope.go:117] "RemoveContainer" containerID="3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5" Mar 19 09:35:50.001516 master-0 kubenswrapper[27819]: E0319 09:35:50.001486 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5\": container with ID starting with 3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5 not found: ID does not 
exist" containerID="3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5" Mar 19 09:35:50.001613 master-0 kubenswrapper[27819]: I0319 09:35:50.001510 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5"} err="failed to get container status \"3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5\": rpc error: code = NotFound desc = could not find container \"3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5\": container with ID starting with 3c9c4e6053f71f7328992e209feaa2e7d27d65cfbf92d3d5134de93b9c90c1f5 not found: ID does not exist" Mar 19 09:35:50.001613 master-0 kubenswrapper[27819]: I0319 09:35:50.001529 27819 scope.go:117] "RemoveContainer" containerID="32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45" Mar 19 09:35:50.002004 master-0 kubenswrapper[27819]: E0319 09:35:50.001967 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45\": container with ID starting with 32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45 not found: ID does not exist" containerID="32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45" Mar 19 09:35:50.002137 master-0 kubenswrapper[27819]: I0319 09:35:50.001999 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45"} err="failed to get container status \"32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45\": rpc error: code = NotFound desc = could not find container \"32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45\": container with ID starting with 32083ec7dd6bceaf294142b04266be5f95b7c53f3555b6b6cb87a7cbc15d7c45 not found: ID does not exist" Mar 19 09:35:50.002137 
master-0 kubenswrapper[27819]: I0319 09:35:50.002014 27819 scope.go:117] "RemoveContainer" containerID="b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1" Mar 19 09:35:50.002255 master-0 kubenswrapper[27819]: E0319 09:35:50.002229 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1\": container with ID starting with b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1 not found: ID does not exist" containerID="b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1" Mar 19 09:35:50.002310 master-0 kubenswrapper[27819]: I0319 09:35:50.002259 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1"} err="failed to get container status \"b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1\": rpc error: code = NotFound desc = could not find container \"b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1\": container with ID starting with b455f0d4d266a3cab85c3c0fc49dd30a4f3c9393d4be08eb35ab109060ffc9e1 not found: ID does not exist" Mar 19 09:35:50.002310 master-0 kubenswrapper[27819]: I0319 09:35:50.002278 27819 scope.go:117] "RemoveContainer" containerID="250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573" Mar 19 09:35:50.002604 master-0 kubenswrapper[27819]: E0319 09:35:50.002580 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573\": container with ID starting with 250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573 not found: ID does not exist" containerID="250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573" Mar 19 09:35:50.002651 master-0 kubenswrapper[27819]: I0319 
09:35:50.002608 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573"} err="failed to get container status \"250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573\": rpc error: code = NotFound desc = could not find container \"250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573\": container with ID starting with 250f46d69cb1bcb159c6c375fcace5f13e599601d8e2a715a666f5b360c1d573 not found: ID does not exist" Mar 19 09:35:50.002651 master-0 kubenswrapper[27819]: I0319 09:35:50.002625 27819 scope.go:117] "RemoveContainer" containerID="eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244" Mar 19 09:35:50.003010 master-0 kubenswrapper[27819]: E0319 09:35:50.002977 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244\": container with ID starting with eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244 not found: ID does not exist" containerID="eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244" Mar 19 09:35:50.003010 master-0 kubenswrapper[27819]: I0319 09:35:50.003002 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244"} err="failed to get container status \"eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244\": rpc error: code = NotFound desc = could not find container \"eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244\": container with ID starting with eb573639ae30a6e64e4f1930adb97669d079ef524374e74ec05f4a37c1f9d244 not found: ID does not exist" Mar 19 09:35:50.241647 master-0 kubenswrapper[27819]: I0319 09:35:50.241555 27819 status_manager.go:851] "Failed to get status for pod" 
podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:50.242223 master-0 kubenswrapper[27819]: I0319 09:35:50.242192 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:50.242613 master-0 kubenswrapper[27819]: I0319 09:35:50.242575 27819 status_manager.go:851] "Failed to get status for pod" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:50.982736 master-0 kubenswrapper[27819]: E0319 09:35:50.982678 27819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 19 09:35:51.187386 master-0 kubenswrapper[27819]: I0319 09:35:51.187286 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:35:51.188135 master-0 kubenswrapper[27819]: I0319 09:35:51.188036 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:51.188802 master-0 kubenswrapper[27819]: I0319 09:35:51.188750 27819 status_manager.go:851] "Failed to get status for pod" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:51.189603 master-0 kubenswrapper[27819]: I0319 09:35:51.189482 27819 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:51.287375 master-0 kubenswrapper[27819]: I0319 09:35:51.287310 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5ce05b3d592e63f1f92202d52b9635" path="/var/lib/kubelet/pods/7d5ce05b3d592e63f1f92202d52b9635/volumes" Mar 19 09:35:51.356314 master-0 kubenswrapper[27819]: I0319 09:35:51.356251 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-kubelet-dir\") pod \"72f73c81-e455-430a-9cb7-c11a61d977ad\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " Mar 19 09:35:51.356314 
master-0 kubenswrapper[27819]: I0319 09:35:51.356311 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72f73c81-e455-430a-9cb7-c11a61d977ad-kube-api-access\") pod \"72f73c81-e455-430a-9cb7-c11a61d977ad\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " Mar 19 09:35:51.356576 master-0 kubenswrapper[27819]: I0319 09:35:51.356406 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-var-lock\") pod \"72f73c81-e455-430a-9cb7-c11a61d977ad\" (UID: \"72f73c81-e455-430a-9cb7-c11a61d977ad\") " Mar 19 09:35:51.356576 master-0 kubenswrapper[27819]: I0319 09:35:51.356426 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72f73c81-e455-430a-9cb7-c11a61d977ad" (UID: "72f73c81-e455-430a-9cb7-c11a61d977ad"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:35:51.356576 master-0 kubenswrapper[27819]: I0319 09:35:51.356506 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-var-lock" (OuterVolumeSpecName: "var-lock") pod "72f73c81-e455-430a-9cb7-c11a61d977ad" (UID: "72f73c81-e455-430a-9cb7-c11a61d977ad"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:35:51.356890 master-0 kubenswrapper[27819]: I0319 09:35:51.356840 27819 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:51.356890 master-0 kubenswrapper[27819]: I0319 09:35:51.356883 27819 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/72f73c81-e455-430a-9cb7-c11a61d977ad-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:51.358970 master-0 kubenswrapper[27819]: I0319 09:35:51.358915 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f73c81-e455-430a-9cb7-c11a61d977ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72f73c81-e455-430a-9cb7-c11a61d977ad" (UID: "72f73c81-e455-430a-9cb7-c11a61d977ad"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:35:51.458107 master-0 kubenswrapper[27819]: I0319 09:35:51.457959 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72f73c81-e455-430a-9cb7-c11a61d977ad-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:35:51.939727 master-0 kubenswrapper[27819]: I0319 09:35:51.939631 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"72f73c81-e455-430a-9cb7-c11a61d977ad","Type":"ContainerDied","Data":"bedd82ea706418e76e83dd689f44782a3eae5aa2b6d319fa4b49e11d41477dcd"} Mar 19 09:35:51.939727 master-0 kubenswrapper[27819]: I0319 09:35:51.939710 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedd82ea706418e76e83dd689f44782a3eae5aa2b6d319fa4b49e11d41477dcd" Mar 19 09:35:51.939727 master-0 kubenswrapper[27819]: I0319 09:35:51.939716 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 19 09:35:51.972653 master-0 kubenswrapper[27819]: I0319 09:35:51.972574 27819 status_manager.go:851] "Failed to get status for pod" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:51.973092 master-0 kubenswrapper[27819]: I0319 09:35:51.973053 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:54.184393 master-0 kubenswrapper[27819]: E0319 09:35:54.184346 27819 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 19 09:35:54.585753 master-0 kubenswrapper[27819]: E0319 09:35:54.585475 27819 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189e346c8ccf13ba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:7d5ce05b3d592e63f1f92202d52b9635,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-19 09:35:44.054047674 +0000 UTC m=+128.975625376,LastTimestamp:2026-03-19 09:35:44.054047674 +0000 UTC m=+128.975625376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 19 09:35:55.279820 master-0 kubenswrapper[27819]: I0319 09:35:55.279702 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:55.291460 master-0 kubenswrapper[27819]: I0319 09:35:55.291261 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:55.294045 master-0 kubenswrapper[27819]: I0319 09:35:55.293978 27819 status_manager.go:851] "Failed to get status for pod" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:55.294702 master-0 kubenswrapper[27819]: I0319 09:35:55.294654 27819 status_manager.go:851] "Failed to get status for pod" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" pod="openshift-kube-apiserver/installer-6-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:55.295228 master-0 kubenswrapper[27819]: I0319 09:35:55.295170 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:55.314810 master-0 kubenswrapper[27819]: I0319 09:35:55.311305 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:35:55.314810 master-0 kubenswrapper[27819]: I0319 09:35:55.311349 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:35:55.316110 master-0 kubenswrapper[27819]: E0319 09:35:55.316053 27819 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:55.316821 master-0 kubenswrapper[27819]: I0319 09:35:55.316796 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:55.963835 master-0 kubenswrapper[27819]: I0319 09:35:55.963781 27819 generic.go:334] "Generic (PLEG): container finished" podID="274c4bebf95a655851b2cf276fe43ef7" containerID="b848271be27fff7f7fb4d2aa1801065089c8139cb61d7829f020523f59230c7f" exitCode=0 Mar 19 09:35:55.963835 master-0 kubenswrapper[27819]: I0319 09:35:55.963817 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerDied","Data":"b848271be27fff7f7fb4d2aa1801065089c8139cb61d7829f020523f59230c7f"} Mar 19 09:35:55.964072 master-0 kubenswrapper[27819]: I0319 09:35:55.963923 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"16e56b3a34f8c6e3379b22c60edcedb4098e72c8c890733c6b384cfa6e3b3b4e"} Mar 19 09:35:55.964224 master-0 kubenswrapper[27819]: I0319 09:35:55.964197 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:35:55.964224 master-0 kubenswrapper[27819]: I0319 09:35:55.964219 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:35:55.964958 master-0 kubenswrapper[27819]: I0319 09:35:55.964927 27819 status_manager.go:851] "Failed to get status for pod" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:55.965058 master-0 kubenswrapper[27819]: E0319 09:35:55.964939 27819 mirror_client.go:138] "Failed deleting 
a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:55.965366 master-0 kubenswrapper[27819]: I0319 09:35:55.965328 27819 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 19 09:35:56.999359 master-0 kubenswrapper[27819]: I0319 09:35:56.998134 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"e205af559ffc6c2d78d55df5b70253e7137c95f9a58a8517a2e60952e922a60f"} Mar 19 09:35:56.999359 master-0 kubenswrapper[27819]: I0319 09:35:56.998184 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"e15b94c92381012c36490b2a434a5dbff220f627d808a235097c2524cffa1753"} Mar 19 09:35:56.999359 master-0 kubenswrapper[27819]: I0319 09:35:56.998195 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"f0f209650e642b1a0db243364f8b6cea6b86ee82954a0855647996692dfa787d"} Mar 19 09:35:56.999359 master-0 kubenswrapper[27819]: I0319 09:35:56.998204 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"d7eb8558b1d46434da9fc2c63059273e1e5b97eb4b459723b7c4514d808429d3"} Mar 19 09:35:57.391637 master-0 kubenswrapper[27819]: I0319 09:35:57.391533 27819 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 09:35:57.391867 master-0 kubenswrapper[27819]: I0319 09:35:57.391637 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 09:35:58.006000 master-0 kubenswrapper[27819]: I0319 09:35:58.005828 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager/0.log" Mar 19 09:35:58.006000 master-0 kubenswrapper[27819]: I0319 09:35:58.005886 27819 generic.go:334] "Generic (PLEG): container finished" podID="d3939b09ae7c21557b3dd5ab01349318" containerID="32eb7fb05bc6b163861c244117b54dba57fa4d47af128f0765f0e871f04fa152" exitCode=1 Mar 19 09:35:58.006574 master-0 kubenswrapper[27819]: I0319 09:35:58.005997 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerDied","Data":"32eb7fb05bc6b163861c244117b54dba57fa4d47af128f0765f0e871f04fa152"} Mar 19 09:35:58.006574 master-0 kubenswrapper[27819]: I0319 09:35:58.006485 27819 scope.go:117] "RemoveContainer" 
containerID="32eb7fb05bc6b163861c244117b54dba57fa4d47af128f0765f0e871f04fa152" Mar 19 09:35:58.009434 master-0 kubenswrapper[27819]: I0319 09:35:58.008992 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"6c18609f80a4ad34e4af18e26de2e4d7519fceb138112ab0761f27d623cbae2d"} Mar 19 09:35:58.009434 master-0 kubenswrapper[27819]: I0319 09:35:58.009197 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:35:58.009434 master-0 kubenswrapper[27819]: I0319 09:35:58.009254 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:35:58.009434 master-0 kubenswrapper[27819]: I0319 09:35:58.009271 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:35:58.154159 master-0 kubenswrapper[27819]: I0319 09:35:58.154109 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:35:59.017791 master-0 kubenswrapper[27819]: I0319 09:35:59.017755 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager/0.log" Mar 19 09:35:59.018374 master-0 kubenswrapper[27819]: I0319 09:35:59.017810 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerStarted","Data":"5be498e28584ea542ce41a8bc159a7ded439c5a053defc650ccce7fc0d099fa0"} Mar 19 09:36:00.316950 master-0 kubenswrapper[27819]: I0319 09:36:00.316894 27819 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:36:00.316950 master-0 kubenswrapper[27819]: I0319 09:36:00.316954 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:36:00.321870 master-0 kubenswrapper[27819]: I0319 09:36:00.321829 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:36:03.027853 master-0 kubenswrapper[27819]: I0319 09:36:03.027802 27819 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:36:03.082569 master-0 kubenswrapper[27819]: I0319 09:36:03.080452 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:36:03.082569 master-0 kubenswrapper[27819]: I0319 09:36:03.080484 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:36:03.088115 master-0 kubenswrapper[27819]: I0319 09:36:03.088065 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:36:04.091351 master-0 kubenswrapper[27819]: I0319 09:36:04.091277 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:36:04.091351 master-0 kubenswrapper[27819]: I0319 09:36:04.091334 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b09e7958-0b1f-4579-b3aa-bce84f5ef88d" Mar 19 09:36:05.302017 master-0 kubenswrapper[27819]: I0319 09:36:05.301949 27819 status_manager.go:861] "Pod was deleted and then recreated, skipping status 
update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="2c6b9f13-3ef1-4270-a63b-99c8ebb47ea7" Mar 19 09:36:05.740475 master-0 kubenswrapper[27819]: I0319 09:36:05.740423 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 19 09:36:05.965628 master-0 kubenswrapper[27819]: I0319 09:36:05.965579 27819 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 19 09:36:06.166914 master-0 kubenswrapper[27819]: I0319 09:36:06.166846 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 19 09:36:06.201480 master-0 kubenswrapper[27819]: I0319 09:36:06.201433 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 19 09:36:06.265940 master-0 kubenswrapper[27819]: I0319 09:36:06.265892 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-vxsrn" Mar 19 09:36:06.289331 master-0 kubenswrapper[27819]: I0319 09:36:06.289281 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 19 09:36:06.295666 master-0 kubenswrapper[27819]: I0319 09:36:06.295612 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 19 09:36:06.509816 master-0 kubenswrapper[27819]: I0319 09:36:06.509674 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 19 09:36:06.634916 master-0 kubenswrapper[27819]: I0319 09:36:06.634839 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 19 09:36:06.886398 master-0 kubenswrapper[27819]: I0319 09:36:06.886363 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 19 09:36:06.954297 master-0 kubenswrapper[27819]: I0319 09:36:06.953775 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 19 09:36:06.993246 master-0 kubenswrapper[27819]: I0319 09:36:06.993154 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 19 09:36:07.002952 master-0 kubenswrapper[27819]: I0319 09:36:07.002881 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 19 09:36:07.195786 master-0 kubenswrapper[27819]: I0319 09:36:07.195660 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 19 09:36:07.303740 master-0 kubenswrapper[27819]: I0319 09:36:07.303504 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 19 09:36:07.381037 master-0 kubenswrapper[27819]: I0319 09:36:07.380960 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:36:07.561915 master-0 kubenswrapper[27819]: I0319 09:36:07.561762 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-gmb5f" Mar 19 09:36:07.563775 master-0 kubenswrapper[27819]: I0319 09:36:07.563723 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 19 09:36:08.126734 master-0 kubenswrapper[27819]: I0319 09:36:08.126672 
27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 19 09:36:08.154479 master-0 kubenswrapper[27819]: I0319 09:36:08.154419 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:36:08.154914 master-0 kubenswrapper[27819]: I0319 09:36:08.154871 27819 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 09:36:08.154996 master-0 kubenswrapper[27819]: I0319 09:36:08.154959 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 09:36:08.217238 master-0 kubenswrapper[27819]: I0319 09:36:08.217179 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 19 09:36:08.285852 master-0 kubenswrapper[27819]: I0319 09:36:08.285793 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 19 09:36:08.403717 master-0 kubenswrapper[27819]: I0319 09:36:08.403599 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 19 09:36:08.581963 master-0 kubenswrapper[27819]: I0319 09:36:08.581909 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 19 09:36:08.639349 master-0 kubenswrapper[27819]: I0319 
09:36:08.639284 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 19 09:36:08.653444 master-0 kubenswrapper[27819]: I0319 09:36:08.653392 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 19 09:36:08.658489 master-0 kubenswrapper[27819]: I0319 09:36:08.658389 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 19 09:36:08.950069 master-0 kubenswrapper[27819]: I0319 09:36:08.949933 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 19 09:36:08.967412 master-0 kubenswrapper[27819]: I0319 09:36:08.967335 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 19 09:36:09.073668 master-0 kubenswrapper[27819]: I0319 09:36:09.073603 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 19 09:36:09.198563 master-0 kubenswrapper[27819]: I0319 09:36:09.198479 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 19 09:36:09.273691 master-0 kubenswrapper[27819]: I0319 09:36:09.273583 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 19 09:36:09.282567 master-0 kubenswrapper[27819]: I0319 09:36:09.282477 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 19 09:36:09.356457 master-0 kubenswrapper[27819]: I0319 09:36:09.356373 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 19 09:36:09.505521 master-0 
kubenswrapper[27819]: I0319 09:36:09.505453 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 19 09:36:09.509948 master-0 kubenswrapper[27819]: I0319 09:36:09.509921 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 19 09:36:09.732731 master-0 kubenswrapper[27819]: I0319 09:36:09.732667 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 19 09:36:09.770701 master-0 kubenswrapper[27819]: I0319 09:36:09.770646 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 19 09:36:09.839346 master-0 kubenswrapper[27819]: I0319 09:36:09.839294 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 19 09:36:09.861304 master-0 kubenswrapper[27819]: I0319 09:36:09.861239 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 19 09:36:09.926215 master-0 kubenswrapper[27819]: I0319 09:36:09.926106 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 19 09:36:09.936807 master-0 kubenswrapper[27819]: I0319 09:36:09.936760 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-xd9dt" Mar 19 09:36:09.938194 master-0 kubenswrapper[27819]: I0319 09:36:09.938164 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 19 09:36:09.969493 master-0 kubenswrapper[27819]: I0319 09:36:09.969437 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 19 
09:36:10.057742 master-0 kubenswrapper[27819]: I0319 09:36:10.057590 27819 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 19 09:36:10.061072 master-0 kubenswrapper[27819]: I0319 09:36:10.060991 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 19 09:36:10.065659 master-0 kubenswrapper[27819]: I0319 09:36:10.065523 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 19 09:36:10.196409 master-0 kubenswrapper[27819]: I0319 09:36:10.196340 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 19 09:36:10.197161 master-0 kubenswrapper[27819]: I0319 09:36:10.196416 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 19 09:36:10.215472 master-0 kubenswrapper[27819]: I0319 09:36:10.215436 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 19 09:36:10.236370 master-0 kubenswrapper[27819]: I0319 09:36:10.236325 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 19 09:36:10.247181 master-0 kubenswrapper[27819]: I0319 09:36:10.247151 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 19 09:36:10.262049 master-0 kubenswrapper[27819]: I0319 09:36:10.262002 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 19 09:36:10.263867 master-0 kubenswrapper[27819]: I0319 09:36:10.263803 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 19 09:36:10.311969 master-0 kubenswrapper[27819]: I0319 09:36:10.311825 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 19 09:36:10.330117 master-0 kubenswrapper[27819]: I0319 09:36:10.330042 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 19 09:36:10.433112 master-0 kubenswrapper[27819]: I0319 09:36:10.433053 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 19 09:36:10.456288 master-0 kubenswrapper[27819]: I0319 09:36:10.456230 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 19 09:36:10.458663 master-0 kubenswrapper[27819]: I0319 09:36:10.458618 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 19 09:36:10.462196 master-0 kubenswrapper[27819]: I0319 09:36:10.462166 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 19 09:36:10.464506 master-0 kubenswrapper[27819]: I0319 09:36:10.464462 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 19 09:36:10.479454 master-0 kubenswrapper[27819]: I0319 09:36:10.479411 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 19 09:36:10.486806 master-0 kubenswrapper[27819]: I0319 09:36:10.486774 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vkwb4" Mar 19 09:36:10.530323 master-0 kubenswrapper[27819]: I0319 09:36:10.530260 27819 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 19 09:36:10.533408 master-0 kubenswrapper[27819]: I0319 09:36:10.533370 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 19 09:36:10.573995 master-0 kubenswrapper[27819]: I0319 09:36:10.573884 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 19 09:36:10.588009 master-0 kubenswrapper[27819]: I0319 09:36:10.587964 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 19 09:36:10.654254 master-0 kubenswrapper[27819]: I0319 09:36:10.654208 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 19 09:36:10.660904 master-0 kubenswrapper[27819]: I0319 09:36:10.660520 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 19 09:36:10.706142 master-0 kubenswrapper[27819]: I0319 09:36:10.706077 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 19 09:36:10.745106 master-0 kubenswrapper[27819]: I0319 09:36:10.745026 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 19 09:36:10.836449 master-0 kubenswrapper[27819]: I0319 09:36:10.836298 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-2hrp4" Mar 19 09:36:10.844435 master-0 kubenswrapper[27819]: I0319 09:36:10.844389 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 19 09:36:10.845476 master-0 kubenswrapper[27819]: I0319 09:36:10.845411 27819 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-4qjxp" Mar 19 09:36:10.985017 master-0 kubenswrapper[27819]: I0319 09:36:10.984967 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p2d2f" Mar 19 09:36:11.000078 master-0 kubenswrapper[27819]: I0319 09:36:11.000034 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 19 09:36:11.002136 master-0 kubenswrapper[27819]: I0319 09:36:11.002106 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 19 09:36:11.069621 master-0 kubenswrapper[27819]: I0319 09:36:11.069532 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 19 09:36:11.071216 master-0 kubenswrapper[27819]: I0319 09:36:11.071159 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 19 09:36:11.077363 master-0 kubenswrapper[27819]: I0319 09:36:11.077317 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 19 09:36:11.083632 master-0 kubenswrapper[27819]: I0319 09:36:11.083599 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 19 09:36:11.099004 master-0 kubenswrapper[27819]: I0319 09:36:11.098969 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 09:36:11.110039 master-0 kubenswrapper[27819]: I0319 09:36:11.109999 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 19 09:36:11.147440 master-0 kubenswrapper[27819]: I0319 09:36:11.147353 
27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 19 09:36:11.148306 master-0 kubenswrapper[27819]: I0319 09:36:11.148291 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 19 09:36:11.165287 master-0 kubenswrapper[27819]: I0319 09:36:11.165270 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-g7f7m" Mar 19 09:36:11.177796 master-0 kubenswrapper[27819]: I0319 09:36:11.177759 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 19 09:36:11.198290 master-0 kubenswrapper[27819]: I0319 09:36:11.198253 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 19 09:36:11.211357 master-0 kubenswrapper[27819]: I0319 09:36:11.211321 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 19 09:36:11.239396 master-0 kubenswrapper[27819]: I0319 09:36:11.239343 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 19 09:36:11.275849 master-0 kubenswrapper[27819]: I0319 09:36:11.275787 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 19 09:36:11.277002 master-0 kubenswrapper[27819]: I0319 09:36:11.276955 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x7brr" Mar 19 09:36:11.316583 master-0 kubenswrapper[27819]: I0319 09:36:11.316480 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 19 09:36:11.332245 master-0 
kubenswrapper[27819]: I0319 09:36:11.332191 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 19 09:36:11.377466 master-0 kubenswrapper[27819]: I0319 09:36:11.377367 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 19 09:36:11.419162 master-0 kubenswrapper[27819]: I0319 09:36:11.419128 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 19 09:36:11.427601 master-0 kubenswrapper[27819]: I0319 09:36:11.427533 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 19 09:36:11.437639 master-0 kubenswrapper[27819]: I0319 09:36:11.437617 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 19 09:36:11.438853 master-0 kubenswrapper[27819]: I0319 09:36:11.438803 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 19 09:36:11.453448 master-0 kubenswrapper[27819]: I0319 09:36:11.453410 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 19 09:36:11.533454 master-0 kubenswrapper[27819]: I0319 09:36:11.533399 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 19 09:36:11.573753 master-0 kubenswrapper[27819]: I0319 09:36:11.573694 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-fxmqq" Mar 19 09:36:11.590610 master-0 kubenswrapper[27819]: I0319 09:36:11.590568 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 19 09:36:11.611850 
master-0 kubenswrapper[27819]: I0319 09:36:11.611806 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 19 09:36:11.648853 master-0 kubenswrapper[27819]: I0319 09:36:11.648734 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 19 09:36:11.695120 master-0 kubenswrapper[27819]: I0319 09:36:11.695045 27819 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 19 09:36:11.703258 master-0 kubenswrapper[27819]: I0319 09:36:11.703232 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4tu9qkfhfujlu" Mar 19 09:36:11.728717 master-0 kubenswrapper[27819]: I0319 09:36:11.728666 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 19 09:36:11.789205 master-0 kubenswrapper[27819]: I0319 09:36:11.789163 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 19 09:36:11.918050 master-0 kubenswrapper[27819]: I0319 09:36:11.917913 27819 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 19 09:36:11.919909 master-0 kubenswrapper[27819]: I0319 09:36:11.919866 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 19 09:36:11.924168 master-0 kubenswrapper[27819]: I0319 09:36:11.924089 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=27.924069938 podStartE2EDuration="27.924069938s" podCreationTimestamp="2026-03-19 09:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:36:03.035298814 +0000 UTC m=+147.956876516" watchObservedRunningTime="2026-03-19 09:36:11.924069938 +0000 UTC m=+156.845647630" Mar 19 09:36:11.924714 master-0 kubenswrapper[27819]: I0319 09:36:11.924682 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 09:36:11.924789 master-0 kubenswrapper[27819]: I0319 09:36:11.924722 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 19 09:36:11.934872 master-0 kubenswrapper[27819]: I0319 09:36:11.934374 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 19 09:36:11.945308 master-0 kubenswrapper[27819]: I0319 09:36:11.945273 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 19 09:36:11.953012 master-0 kubenswrapper[27819]: I0319 09:36:11.952939 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=8.952918095 podStartE2EDuration="8.952918095s" podCreationTimestamp="2026-03-19 09:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:36:11.948451202 +0000 UTC m=+156.870028914" watchObservedRunningTime="2026-03-19 09:36:11.952918095 +0000 UTC m=+156.874495787" Mar 19 09:36:11.966514 master-0 kubenswrapper[27819]: I0319 09:36:11.966477 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 19 09:36:11.980341 master-0 kubenswrapper[27819]: I0319 09:36:11.980314 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 19 09:36:11.994746 master-0 
kubenswrapper[27819]: I0319 09:36:11.994723 27819 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 19 09:36:12.007841 master-0 kubenswrapper[27819]: I0319 09:36:12.007745 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-2ncgt" Mar 19 09:36:12.072608 master-0 kubenswrapper[27819]: I0319 09:36:12.072521 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 19 09:36:12.103677 master-0 kubenswrapper[27819]: I0319 09:36:12.103642 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 19 09:36:12.107490 master-0 kubenswrapper[27819]: I0319 09:36:12.107451 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 19 09:36:12.119949 master-0 kubenswrapper[27819]: I0319 09:36:12.119911 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-45rfb" Mar 19 09:36:12.152527 master-0 kubenswrapper[27819]: I0319 09:36:12.152469 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 19 09:36:12.162059 master-0 kubenswrapper[27819]: I0319 09:36:12.162009 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 19 09:36:12.184358 master-0 kubenswrapper[27819]: I0319 09:36:12.184264 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 19 09:36:12.198639 master-0 kubenswrapper[27819]: I0319 09:36:12.198595 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 19 09:36:12.227827 master-0 
kubenswrapper[27819]: I0319 09:36:12.227768 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 19 09:36:12.237273 master-0 kubenswrapper[27819]: I0319 09:36:12.237231 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 19 09:36:12.245675 master-0 kubenswrapper[27819]: I0319 09:36:12.245622 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 19 09:36:12.347760 master-0 kubenswrapper[27819]: I0319 09:36:12.347712 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 19 09:36:12.411556 master-0 kubenswrapper[27819]: I0319 09:36:12.411494 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 19 09:36:12.426425 master-0 kubenswrapper[27819]: I0319 09:36:12.425227 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 19 09:36:12.463305 master-0 kubenswrapper[27819]: I0319 09:36:12.463094 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 19 09:36:12.474251 master-0 kubenswrapper[27819]: I0319 09:36:12.474219 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 19 09:36:12.480314 master-0 kubenswrapper[27819]: I0319 09:36:12.480265 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 19 09:36:12.518999 master-0 kubenswrapper[27819]: I0319 09:36:12.518964 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 19 09:36:12.520358 master-0 kubenswrapper[27819]: I0319 
09:36:12.520291 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 19 09:36:12.536909 master-0 kubenswrapper[27819]: I0319 09:36:12.536809 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 19 09:36:12.547890 master-0 kubenswrapper[27819]: I0319 09:36:12.547820 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 19 09:36:12.552758 master-0 kubenswrapper[27819]: I0319 09:36:12.552717 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 19 09:36:12.588427 master-0 kubenswrapper[27819]: I0319 09:36:12.588328 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 19 09:36:12.618249 master-0 kubenswrapper[27819]: I0319 09:36:12.618141 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-dq4bt" Mar 19 09:36:12.659611 master-0 kubenswrapper[27819]: I0319 09:36:12.659526 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 19 09:36:12.670359 master-0 kubenswrapper[27819]: I0319 09:36:12.670309 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 19 09:36:12.672292 master-0 kubenswrapper[27819]: I0319 09:36:12.672272 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 19 09:36:12.696210 master-0 kubenswrapper[27819]: I0319 09:36:12.696166 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 19 09:36:12.705718 master-0 kubenswrapper[27819]: I0319 09:36:12.705688 27819 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 19 09:36:12.720573 master-0 kubenswrapper[27819]: I0319 09:36:12.720435 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 19 09:36:12.754447 master-0 kubenswrapper[27819]: I0319 09:36:12.754390 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-nst6c" Mar 19 09:36:12.755086 master-0 kubenswrapper[27819]: I0319 09:36:12.755029 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 19 09:36:12.760270 master-0 kubenswrapper[27819]: I0319 09:36:12.760230 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 19 09:36:12.824863 master-0 kubenswrapper[27819]: I0319 09:36:12.824809 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 19 09:36:12.981618 master-0 kubenswrapper[27819]: I0319 09:36:12.981450 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 19 09:36:13.022283 master-0 kubenswrapper[27819]: I0319 09:36:13.022237 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 19 09:36:13.125804 master-0 kubenswrapper[27819]: I0319 09:36:13.125737 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 19 09:36:13.128246 master-0 kubenswrapper[27819]: I0319 09:36:13.128202 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 19 09:36:13.155798 master-0 kubenswrapper[27819]: I0319 09:36:13.155724 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 19 09:36:13.161048 master-0 kubenswrapper[27819]: I0319 09:36:13.161028 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 19 09:36:13.192272 master-0 kubenswrapper[27819]: I0319 09:36:13.192236 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-nf86g" Mar 19 09:36:13.236495 master-0 kubenswrapper[27819]: I0319 09:36:13.236341 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 19 09:36:13.251773 master-0 kubenswrapper[27819]: I0319 09:36:13.251735 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 19 09:36:13.293770 master-0 kubenswrapper[27819]: I0319 09:36:13.293602 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 19 09:36:13.377093 master-0 kubenswrapper[27819]: I0319 09:36:13.377033 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 19 09:36:13.383457 master-0 kubenswrapper[27819]: I0319 09:36:13.383422 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 19 09:36:13.459973 master-0 kubenswrapper[27819]: I0319 09:36:13.459929 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 19 09:36:13.470571 master-0 kubenswrapper[27819]: I0319 09:36:13.470522 27819 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-t4gft" Mar 19 09:36:13.512938 master-0 kubenswrapper[27819]: I0319 09:36:13.512833 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 19 09:36:13.513163 master-0 kubenswrapper[27819]: I0319 09:36:13.513118 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 19 09:36:13.532343 master-0 kubenswrapper[27819]: I0319 09:36:13.532293 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-sx4vp" Mar 19 09:36:13.545661 master-0 kubenswrapper[27819]: I0319 09:36:13.545623 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 19 09:36:13.596165 master-0 kubenswrapper[27819]: I0319 09:36:13.596113 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 19 09:36:13.602260 master-0 kubenswrapper[27819]: I0319 09:36:13.602219 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 19 09:36:13.658910 master-0 kubenswrapper[27819]: I0319 09:36:13.658861 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 19 09:36:13.681712 master-0 kubenswrapper[27819]: I0319 09:36:13.681675 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 19 09:36:13.784948 master-0 kubenswrapper[27819]: I0319 09:36:13.784830 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 19 09:36:13.819149 master-0 kubenswrapper[27819]: I0319 09:36:13.819091 27819 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 19 09:36:13.840788 master-0 kubenswrapper[27819]: I0319 09:36:13.840742 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 19 09:36:13.864130 master-0 kubenswrapper[27819]: I0319 09:36:13.864063 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 19 09:36:13.896667 master-0 kubenswrapper[27819]: I0319 09:36:13.896604 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 19 09:36:14.043174 master-0 kubenswrapper[27819]: I0319 09:36:14.043055 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 19 09:36:14.043477 master-0 kubenswrapper[27819]: I0319 09:36:14.043457 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 19 09:36:14.054657 master-0 kubenswrapper[27819]: I0319 09:36:14.054616 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 19 09:36:14.056408 master-0 kubenswrapper[27819]: I0319 09:36:14.056384 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 19 09:36:14.070021 master-0 kubenswrapper[27819]: I0319 09:36:14.069971 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 19 09:36:14.089844 master-0 kubenswrapper[27819]: I0319 09:36:14.089789 27819 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:36:14.090053 master-0 kubenswrapper[27819]: I0319 
09:36:14.090019 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" containerID="cri-o://efa84227f653a981cbac9a45ac278327aaa9aa4a65f7ec07cb25ef705470a4fa" gracePeriod=5 Mar 19 09:36:14.101752 master-0 kubenswrapper[27819]: I0319 09:36:14.101702 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 19 09:36:14.107430 master-0 kubenswrapper[27819]: I0319 09:36:14.107391 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 19 09:36:14.134592 master-0 kubenswrapper[27819]: I0319 09:36:14.134533 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 19 09:36:14.135224 master-0 kubenswrapper[27819]: I0319 09:36:14.135192 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 19 09:36:14.212566 master-0 kubenswrapper[27819]: I0319 09:36:14.212496 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 19 09:36:14.239524 master-0 kubenswrapper[27819]: I0319 09:36:14.239369 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 19 09:36:14.268679 master-0 kubenswrapper[27819]: I0319 09:36:14.267355 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 19 09:36:14.373754 master-0 kubenswrapper[27819]: I0319 09:36:14.373246 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 19 09:36:14.436778 master-0 kubenswrapper[27819]: I0319 09:36:14.436725 27819 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-4d2wn" Mar 19 09:36:14.456246 master-0 kubenswrapper[27819]: I0319 09:36:14.455949 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 19 09:36:14.502154 master-0 kubenswrapper[27819]: I0319 09:36:14.502104 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 19 09:36:14.505652 master-0 kubenswrapper[27819]: I0319 09:36:14.505609 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 19 09:36:14.550566 master-0 kubenswrapper[27819]: I0319 09:36:14.550486 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 19 09:36:14.557127 master-0 kubenswrapper[27819]: I0319 09:36:14.557062 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 19 09:36:14.580910 master-0 kubenswrapper[27819]: I0319 09:36:14.580859 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 19 09:36:14.632126 master-0 kubenswrapper[27819]: I0319 09:36:14.632024 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-qxk5n" Mar 19 09:36:14.643080 master-0 kubenswrapper[27819]: I0319 09:36:14.643018 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 19 09:36:14.666531 master-0 kubenswrapper[27819]: I0319 09:36:14.666476 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 19 09:36:14.687400 master-0 kubenswrapper[27819]: I0319 09:36:14.687336 27819 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 19 09:36:14.695998 master-0 kubenswrapper[27819]: I0319 09:36:14.695951 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 19 09:36:14.712822 master-0 kubenswrapper[27819]: I0319 09:36:14.712782 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 19 09:36:14.728448 master-0 kubenswrapper[27819]: I0319 09:36:14.728400 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 19 09:36:14.770101 master-0 kubenswrapper[27819]: I0319 09:36:14.770049 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 19 09:36:14.770369 master-0 kubenswrapper[27819]: I0319 09:36:14.770338 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 19 09:36:14.779108 master-0 kubenswrapper[27819]: I0319 09:36:14.779075 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 19 09:36:14.805578 master-0 kubenswrapper[27819]: I0319 09:36:14.805489 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-cpq7s" Mar 19 09:36:14.848736 master-0 kubenswrapper[27819]: I0319 09:36:14.848573 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 19 09:36:14.931851 master-0 kubenswrapper[27819]: I0319 09:36:14.931751 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 19 09:36:14.946081 master-0 kubenswrapper[27819]: I0319 
09:36:14.946014 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 19 09:36:14.966609 master-0 kubenswrapper[27819]: I0319 09:36:14.966559 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 19 09:36:14.966882 master-0 kubenswrapper[27819]: I0319 09:36:14.966579 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-nlbmt" Mar 19 09:36:14.971806 master-0 kubenswrapper[27819]: I0319 09:36:14.971746 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 19 09:36:15.015735 master-0 kubenswrapper[27819]: I0319 09:36:15.015684 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xmjpx" Mar 19 09:36:15.059694 master-0 kubenswrapper[27819]: I0319 09:36:15.059643 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 19 09:36:15.092456 master-0 kubenswrapper[27819]: I0319 09:36:15.092404 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 19 09:36:15.096403 master-0 kubenswrapper[27819]: I0319 09:36:15.096374 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 19 09:36:15.113050 master-0 kubenswrapper[27819]: I0319 09:36:15.112996 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 19 09:36:15.192219 master-0 kubenswrapper[27819]: I0319 09:36:15.192102 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" 
Mar 19 09:36:15.218090 master-0 kubenswrapper[27819]: I0319 09:36:15.218018 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 19 09:36:15.291672 master-0 kubenswrapper[27819]: I0319 09:36:15.291635 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 19 09:36:15.312795 master-0 kubenswrapper[27819]: I0319 09:36:15.312746 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 19 09:36:15.324844 master-0 kubenswrapper[27819]: I0319 09:36:15.324800 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-sjg6x" Mar 19 09:36:15.326705 master-0 kubenswrapper[27819]: I0319 09:36:15.326685 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 19 09:36:15.352831 master-0 kubenswrapper[27819]: I0319 09:36:15.352793 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 19 09:36:15.363411 master-0 kubenswrapper[27819]: I0319 09:36:15.363357 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 19 09:36:15.365157 master-0 kubenswrapper[27819]: I0319 09:36:15.365126 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 19 09:36:15.412859 master-0 kubenswrapper[27819]: I0319 09:36:15.412813 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 19 09:36:15.481186 master-0 kubenswrapper[27819]: I0319 09:36:15.481066 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 19 
09:36:15.856068 master-0 kubenswrapper[27819]: I0319 09:36:15.856014 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 19 09:36:16.046367 master-0 kubenswrapper[27819]: I0319 09:36:16.046320 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 19 09:36:16.143990 master-0 kubenswrapper[27819]: I0319 09:36:16.143784 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 19 09:36:16.296300 master-0 kubenswrapper[27819]: I0319 09:36:16.296243 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-cn6b2" Mar 19 09:36:16.377201 master-0 kubenswrapper[27819]: I0319 09:36:16.377156 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 19 09:36:18.154847 master-0 kubenswrapper[27819]: I0319 09:36:18.154797 27819 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 09:36:18.155653 master-0 kubenswrapper[27819]: I0319 09:36:18.155616 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 09:36:19.190243 master-0 kubenswrapper[27819]: I0319 09:36:19.190181 27819 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 19 09:36:19.190243 master-0 kubenswrapper[27819]: I0319 09:36:19.190240 27819 generic.go:334] "Generic (PLEG): container finished" podID="ebbfbf2b56df0323ba118d68bfdad8b9" containerID="efa84227f653a981cbac9a45ac278327aaa9aa4a65f7ec07cb25ef705470a4fa" exitCode=137 Mar 19 09:36:19.397296 master-0 kubenswrapper[27819]: I0319 09:36:19.397238 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 19 09:36:19.483733 master-0 kubenswrapper[27819]: I0319 09:36:19.483640 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 19 09:36:19.665774 master-0 kubenswrapper[27819]: I0319 09:36:19.665442 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 19 09:36:19.665774 master-0 kubenswrapper[27819]: I0319 09:36:19.665519 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:36:19.713725 master-0 kubenswrapper[27819]: I0319 09:36:19.713638 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 09:36:19.714206 master-0 kubenswrapper[27819]: I0319 09:36:19.713821 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 09:36:19.714206 master-0 kubenswrapper[27819]: I0319 09:36:19.713850 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 09:36:19.714206 master-0 kubenswrapper[27819]: I0319 09:36:19.713922 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 09:36:19.714206 master-0 kubenswrapper[27819]: I0319 09:36:19.713952 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 19 09:36:19.717013 master-0 kubenswrapper[27819]: I0319 09:36:19.714284 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:36:19.717013 master-0 kubenswrapper[27819]: I0319 09:36:19.714324 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests" (OuterVolumeSpecName: "manifests") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:36:19.717013 master-0 kubenswrapper[27819]: I0319 09:36:19.714341 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:36:19.717013 master-0 kubenswrapper[27819]: I0319 09:36:19.714359 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log" (OuterVolumeSpecName: "var-log") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:36:19.720349 master-0 kubenswrapper[27819]: I0319 09:36:19.720291 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:36:19.812046 master-0 kubenswrapper[27819]: I0319 09:36:19.811978 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 19 09:36:19.815339 master-0 kubenswrapper[27819]: I0319 09:36:19.815287 27819 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:36:19.815393 master-0 kubenswrapper[27819]: I0319 09:36:19.815342 27819 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") on node \"master-0\" DevicePath \"\"" Mar 19 09:36:19.815393 master-0 kubenswrapper[27819]: I0319 09:36:19.815364 27819 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:36:19.815393 master-0 kubenswrapper[27819]: I0319 09:36:19.815380 27819 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:36:19.815479 master-0 kubenswrapper[27819]: I0319 09:36:19.815397 27819 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") on node \"master-0\" DevicePath \"\"" Mar 19 09:36:20.197193 master-0 kubenswrapper[27819]: I0319 09:36:20.197142 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 19 09:36:20.197726 master-0 kubenswrapper[27819]: I0319 09:36:20.197209 27819 scope.go:117] 
"RemoveContainer" containerID="efa84227f653a981cbac9a45ac278327aaa9aa4a65f7ec07cb25ef705470a4fa" Mar 19 09:36:20.197726 master-0 kubenswrapper[27819]: I0319 09:36:20.197299 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 19 09:36:20.411909 master-0 kubenswrapper[27819]: I0319 09:36:20.411811 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 19 09:36:21.287922 master-0 kubenswrapper[27819]: I0319 09:36:21.287879 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" path="/var/lib/kubelet/pods/ebbfbf2b56df0323ba118d68bfdad8b9/volumes" Mar 19 09:36:21.288784 master-0 kubenswrapper[27819]: I0319 09:36:21.288767 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 19 09:36:21.304649 master-0 kubenswrapper[27819]: I0319 09:36:21.304531 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:36:21.304649 master-0 kubenswrapper[27819]: I0319 09:36:21.304611 27819 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="9d57b5ad-0b1a-4d43-96fc-9201103038f0" Mar 19 09:36:21.310213 master-0 kubenswrapper[27819]: I0319 09:36:21.310165 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 19 09:36:21.310213 master-0 kubenswrapper[27819]: I0319 09:36:21.310204 27819 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="9d57b5ad-0b1a-4d43-96fc-9201103038f0" Mar 19 09:36:23.931645 master-0 
kubenswrapper[27819]: I0319 09:36:23.931469 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 19 09:36:24.539858 master-0 kubenswrapper[27819]: I0319 09:36:24.539775 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 19 09:36:25.155755 master-0 kubenswrapper[27819]: I0319 09:36:25.155535 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 19 09:36:25.211427 master-0 kubenswrapper[27819]: I0319 09:36:25.211342 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-szhzh" Mar 19 09:36:25.590308 master-0 kubenswrapper[27819]: I0319 09:36:25.590040 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 19 09:36:25.941168 master-0 kubenswrapper[27819]: I0319 09:36:25.941122 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 19 09:36:27.504167 master-0 kubenswrapper[27819]: I0319 09:36:27.504122 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 19 09:36:28.049222 master-0 kubenswrapper[27819]: I0319 09:36:28.049166 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 19 09:36:28.155187 master-0 kubenswrapper[27819]: I0319 09:36:28.155122 27819 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 19 
09:36:28.155391 master-0 kubenswrapper[27819]: I0319 09:36:28.155191 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 19 09:36:28.155391 master-0 kubenswrapper[27819]: I0319 09:36:28.155252 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:36:28.156420 master-0 kubenswrapper[27819]: I0319 09:36:28.155950 27819 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5be498e28584ea542ce41a8bc159a7ded439c5a053defc650ccce7fc0d099fa0"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 19 09:36:28.156420 master-0 kubenswrapper[27819]: I0319 09:36:28.156149 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" containerID="cri-o://5be498e28584ea542ce41a8bc159a7ded439c5a053defc650ccce7fc0d099fa0" gracePeriod=30 Mar 19 09:36:35.441303 master-0 kubenswrapper[27819]: I0319 09:36:35.441263 27819 scope.go:117] "RemoveContainer" containerID="53fac99b9b6d7113ded13db31c06fb6988d91b7900890060d24517f7c6a3af61" Mar 19 09:36:36.526389 master-0 kubenswrapper[27819]: I0319 09:36:36.526345 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm"] Mar 19 09:36:36.527182 master-0 kubenswrapper[27819]: E0319 09:36:36.527165 27819 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 19 09:36:36.527251 master-0 kubenswrapper[27819]: I0319 09:36:36.527241 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 19 09:36:36.527343 master-0 kubenswrapper[27819]: E0319 09:36:36.527333 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" containerName="installer" Mar 19 09:36:36.527399 master-0 kubenswrapper[27819]: I0319 09:36:36.527389 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" containerName="installer" Mar 19 09:36:36.527596 master-0 kubenswrapper[27819]: I0319 09:36:36.527583 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f73c81-e455-430a-9cb7-c11a61d977ad" containerName="installer" Mar 19 09:36:36.527677 master-0 kubenswrapper[27819]: I0319 09:36:36.527667 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 19 09:36:36.528174 master-0 kubenswrapper[27819]: I0319 09:36:36.528157 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.531454 master-0 kubenswrapper[27819]: I0319 09:36:36.531414 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 19 09:36:36.531671 master-0 kubenswrapper[27819]: I0319 09:36:36.531646 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-2p9dz" Mar 19 09:36:36.531814 master-0 kubenswrapper[27819]: I0319 09:36:36.531793 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 19 09:36:36.540620 master-0 kubenswrapper[27819]: I0319 09:36:36.536298 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 19 09:36:36.540620 master-0 kubenswrapper[27819]: I0319 09:36:36.536322 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 19 09:36:36.545185 master-0 kubenswrapper[27819]: I0319 09:36:36.545131 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75ff5c579f-hpdrn"] Mar 19 09:36:36.552865 master-0 kubenswrapper[27819]: I0319 09:36:36.549873 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 19 09:36:36.560284 master-0 kubenswrapper[27819]: I0319 09:36:36.560241 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7"] Mar 19 09:36:36.560516 master-0 kubenswrapper[27819]: I0319 09:36:36.560475 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.564028 master-0 kubenswrapper[27819]: I0319 09:36:36.563846 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" Mar 19 09:36:36.564848 master-0 kubenswrapper[27819]: I0319 09:36:36.564820 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qxvdx" Mar 19 09:36:36.565138 master-0 kubenswrapper[27819]: I0319 09:36:36.565112 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 09:36:36.565270 master-0 kubenswrapper[27819]: I0319 09:36:36.565250 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 09:36:36.566179 master-0 kubenswrapper[27819]: I0319 09:36:36.566152 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 09:36:36.566508 master-0 kubenswrapper[27819]: I0319 09:36:36.566487 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 09:36:36.568363 master-0 kubenswrapper[27819]: I0319 09:36:36.568340 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 19 09:36:36.568612 master-0 kubenswrapper[27819]: I0319 09:36:36.568592 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 09:36:36.570143 master-0 kubenswrapper[27819]: I0319 09:36:36.570112 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-648779ffbc-s842b"] Mar 19 09:36:36.571248 master-0 kubenswrapper[27819]: I0319 09:36:36.571223 27819 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.574405 master-0 kubenswrapper[27819]: I0319 09:36:36.573900 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 19 09:36:36.575020 master-0 kubenswrapper[27819]: I0319 09:36:36.574987 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 19 09:36:36.575103 master-0 kubenswrapper[27819]: I0319 09:36:36.575047 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-fmdhh" Mar 19 09:36:36.575208 master-0 kubenswrapper[27819]: I0319 09:36:36.575160 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 19 09:36:36.575208 master-0 kubenswrapper[27819]: I0319 09:36:36.573909 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 19 09:36:36.575309 master-0 kubenswrapper[27819]: I0319 09:36:36.573974 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 19 09:36:36.575309 master-0 kubenswrapper[27819]: I0319 09:36:36.574004 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 19 09:36:36.575434 master-0 kubenswrapper[27819]: I0319 09:36:36.575417 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 09:36:36.575851 master-0 kubenswrapper[27819]: I0319 09:36:36.575819 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 19 09:36:36.576297 master-0 kubenswrapper[27819]: I0319 
09:36:36.576275 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 19 09:36:36.577035 master-0 kubenswrapper[27819]: I0319 09:36:36.576701 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 19 09:36:36.577845 master-0 kubenswrapper[27819]: I0319 09:36:36.577816 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 19 09:36:36.580578 master-0 kubenswrapper[27819]: I0319 09:36:36.580509 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-274lf"] Mar 19 09:36:36.580941 master-0 kubenswrapper[27819]: I0319 09:36:36.580876 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 19 09:36:36.581416 master-0 kubenswrapper[27819]: I0319 09:36:36.581381 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.583613 master-0 kubenswrapper[27819]: I0319 09:36:36.583509 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-r6445" Mar 19 09:36:36.584055 master-0 kubenswrapper[27819]: I0319 09:36:36.584028 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 19 09:36:36.587577 master-0 kubenswrapper[27819]: I0319 09:36:36.587531 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 19 09:36:36.588612 master-0 kubenswrapper[27819]: I0319 09:36:36.588580 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 19 09:36:36.591707 master-0 kubenswrapper[27819]: I0319 09:36:36.591668 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm"] Mar 19 09:36:36.600051 master-0 kubenswrapper[27819]: I0319 09:36:36.600018 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7"] Mar 19 09:36:36.604226 master-0 kubenswrapper[27819]: I0319 09:36:36.604190 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75ff5c579f-hpdrn"] Mar 19 09:36:36.609769 master-0 kubenswrapper[27819]: I0319 09:36:36.609728 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-648779ffbc-s842b"] Mar 19 09:36:36.709937 master-0 kubenswrapper[27819]: I0319 09:36:36.709881 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-config\") pod 
\"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.709937 master-0 kubenswrapper[27819]: I0319 09:36:36.709931 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/76d149f3-0740-4fa2-b514-cae52bb6c8ab-monitoring-plugin-cert\") pod \"monitoring-plugin-6d57747dbc-ql8p7\" (UID: \"76d149f3-0740-4fa2-b514-cae52bb6c8ab\") " pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" Mar 19 09:36:36.710157 master-0 kubenswrapper[27819]: I0319 09:36:36.709957 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fgz2\" (UniqueName: \"kubernetes.io/projected/4828bd2e-735f-41ff-94b1-6beed79ce6d1-kube-api-access-6fgz2\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.710157 master-0 kubenswrapper[27819]: I0319 09:36:36.709981 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-serving-cert\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710157 master-0 kubenswrapper[27819]: I0319 09:36:36.709999 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " 
pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710157 master-0 kubenswrapper[27819]: I0319 09:36:36.710021 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-router-certs\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710157 master-0 kubenswrapper[27819]: I0319 09:36:36.710090 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4828bd2e-735f-41ff-94b1-6beed79ce6d1-host\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.710382 master-0 kubenswrapper[27819]: I0319 09:36:36.710204 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-audit-policies\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710382 master-0 kubenswrapper[27819]: I0319 09:36:36.710231 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710382 master-0 kubenswrapper[27819]: I0319 09:36:36.710257 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/116b207a-55ef-4ac6-94c2-08574c61a8bb-serving-cert\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.710382 master-0 kubenswrapper[27819]: I0319 09:36:36.710281 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-service-ca\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710493 master-0 kubenswrapper[27819]: I0319 09:36:36.710435 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-error\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710527 master-0 kubenswrapper[27819]: I0319 09:36:36.710492 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-config\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.710585 master-0 kubenswrapper[27819]: I0319 09:36:36.710525 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-client-ca\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.710585 master-0 kubenswrapper[27819]: I0319 09:36:36.710575 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-proxy-ca-bundles\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.710647 master-0 kubenswrapper[27819]: I0319 09:36:36.710628 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc8j\" (UniqueName: \"kubernetes.io/projected/ad08921e-91f0-44a8-8d4e-e7cc444f823f-kube-api-access-knc8j\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.710677 master-0 kubenswrapper[27819]: I0319 09:36:36.710654 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-client-ca\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.710712 master-0 kubenswrapper[27819]: I0319 09:36:36.710678 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-session\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: 
\"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710712 master-0 kubenswrapper[27819]: I0319 09:36:36.710696 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8g7w\" (UniqueName: \"kubernetes.io/projected/116b207a-55ef-4ac6-94c2-08574c61a8bb-kube-api-access-t8g7w\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.710768 master-0 kubenswrapper[27819]: I0319 09:36:36.710722 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng42l\" (UniqueName: \"kubernetes.io/projected/e83a3bcd-f590-4e36-846e-255494625539-kube-api-access-ng42l\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710768 master-0 kubenswrapper[27819]: I0319 09:36:36.710746 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad08921e-91f0-44a8-8d4e-e7cc444f823f-serving-cert\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.710768 master-0 kubenswrapper[27819]: I0319 09:36:36.710764 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-cliconfig\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710851 
master-0 kubenswrapper[27819]: I0319 09:36:36.710781 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710851 master-0 kubenswrapper[27819]: I0319 09:36:36.710814 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e83a3bcd-f590-4e36-846e-255494625539-audit-dir\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710851 master-0 kubenswrapper[27819]: I0319 09:36:36.710836 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-login\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.710933 master-0 kubenswrapper[27819]: I0319 09:36:36.710854 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4828bd2e-735f-41ff-94b1-6beed79ce6d1-serviceca\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.812262 master-0 kubenswrapper[27819]: I0319 09:36:36.812121 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-config\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.812262 master-0 kubenswrapper[27819]: I0319 09:36:36.812191 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/76d149f3-0740-4fa2-b514-cae52bb6c8ab-monitoring-plugin-cert\") pod \"monitoring-plugin-6d57747dbc-ql8p7\" (UID: \"76d149f3-0740-4fa2-b514-cae52bb6c8ab\") " pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" Mar 19 09:36:36.812262 master-0 kubenswrapper[27819]: I0319 09:36:36.812214 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fgz2\" (UniqueName: \"kubernetes.io/projected/4828bd2e-735f-41ff-94b1-6beed79ce6d1-kube-api-access-6fgz2\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.812583 master-0 kubenswrapper[27819]: I0319 09:36:36.812504 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-serving-cert\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.812583 master-0 kubenswrapper[27819]: I0319 09:36:36.812571 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " 
pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.812679 master-0 kubenswrapper[27819]: I0319 09:36:36.812611 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-router-certs\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.812805 master-0 kubenswrapper[27819]: I0319 09:36:36.812744 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4828bd2e-735f-41ff-94b1-6beed79ce6d1-host\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.812967 master-0 kubenswrapper[27819]: I0319 09:36:36.812922 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-audit-policies\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.813038 master-0 kubenswrapper[27819]: I0319 09:36:36.812995 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.813038 master-0 kubenswrapper[27819]: I0319 09:36:36.812838 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/4828bd2e-735f-41ff-94b1-6beed79ce6d1-host\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.813128 master-0 kubenswrapper[27819]: I0319 09:36:36.813050 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/116b207a-55ef-4ac6-94c2-08574c61a8bb-serving-cert\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.813128 master-0 kubenswrapper[27819]: I0319 09:36:36.813080 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-service-ca\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.813222 master-0 kubenswrapper[27819]: I0319 09:36:36.813163 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-error\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.813266 master-0 kubenswrapper[27819]: I0319 09:36:36.813225 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-config\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.813266 master-0 
kubenswrapper[27819]: I0319 09:36:36.813252 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-client-ca\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.813834 master-0 kubenswrapper[27819]: I0319 09:36:36.813805 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-audit-policies\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.813919 master-0 kubenswrapper[27819]: I0319 09:36:36.813272 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-proxy-ca-bundles\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.813919 master-0 kubenswrapper[27819]: I0319 09:36:36.813870 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-config\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.813987 master-0 kubenswrapper[27819]: I0319 09:36:36.813886 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-client-ca\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: 
\"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.813987 master-0 kubenswrapper[27819]: I0319 09:36:36.813970 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc8j\" (UniqueName: \"kubernetes.io/projected/ad08921e-91f0-44a8-8d4e-e7cc444f823f-kube-api-access-knc8j\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.814047 master-0 kubenswrapper[27819]: I0319 09:36:36.814010 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-session\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.814079 master-0 kubenswrapper[27819]: I0319 09:36:36.814043 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8g7w\" (UniqueName: \"kubernetes.io/projected/116b207a-55ef-4ac6-94c2-08574c61a8bb-kube-api-access-t8g7w\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.814136 master-0 kubenswrapper[27819]: I0319 09:36:36.814107 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng42l\" (UniqueName: \"kubernetes.io/projected/e83a3bcd-f590-4e36-846e-255494625539-kube-api-access-ng42l\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.814244 master-0 kubenswrapper[27819]: I0319 
09:36:36.814218 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad08921e-91f0-44a8-8d4e-e7cc444f823f-serving-cert\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.814292 master-0 kubenswrapper[27819]: I0319 09:36:36.814269 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-cliconfig\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.814328 master-0 kubenswrapper[27819]: I0319 09:36:36.814283 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-service-ca\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.814364 master-0 kubenswrapper[27819]: I0319 09:36:36.814294 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.814405 master-0 kubenswrapper[27819]: I0319 09:36:36.814371 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/e83a3bcd-f590-4e36-846e-255494625539-audit-dir\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.814440 master-0 kubenswrapper[27819]: I0319 09:36:36.814409 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-login\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.814440 master-0 kubenswrapper[27819]: I0319 09:36:36.814434 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4828bd2e-735f-41ff-94b1-6beed79ce6d1-serviceca\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.814676 master-0 kubenswrapper[27819]: I0319 09:36:36.814639 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-client-ca\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.815068 master-0 kubenswrapper[27819]: I0319 09:36:36.815032 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-config\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.815318 master-0 kubenswrapper[27819]: I0319 09:36:36.815281 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4828bd2e-735f-41ff-94b1-6beed79ce6d1-serviceca\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.815390 master-0 kubenswrapper[27819]: I0319 09:36:36.815349 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e83a3bcd-f590-4e36-846e-255494625539-audit-dir\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.815453 master-0 kubenswrapper[27819]: I0319 09:36:36.815391 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-client-ca\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.815453 master-0 kubenswrapper[27819]: I0319 09:36:36.815401 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.816213 master-0 kubenswrapper[27819]: I0319 09:36:36.816043 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-cliconfig\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " 
pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.816213 master-0 kubenswrapper[27819]: I0319 09:36:36.816127 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-serving-cert\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.817609 master-0 kubenswrapper[27819]: I0319 09:36:36.816642 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-proxy-ca-bundles\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.817609 master-0 kubenswrapper[27819]: I0319 09:36:36.817202 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/76d149f3-0740-4fa2-b514-cae52bb6c8ab-monitoring-plugin-cert\") pod \"monitoring-plugin-6d57747dbc-ql8p7\" (UID: \"76d149f3-0740-4fa2-b514-cae52bb6c8ab\") " pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" Mar 19 09:36:36.817967 master-0 kubenswrapper[27819]: I0319 09:36:36.817821 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-error\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.817967 master-0 kubenswrapper[27819]: I0319 09:36:36.817876 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/116b207a-55ef-4ac6-94c2-08574c61a8bb-serving-cert\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.818113 master-0 kubenswrapper[27819]: I0319 09:36:36.818061 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.819761 master-0 kubenswrapper[27819]: I0319 09:36:36.819729 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-session\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.820169 master-0 kubenswrapper[27819]: I0319 09:36:36.820134 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-login\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.821175 master-0 kubenswrapper[27819]: I0319 09:36:36.821148 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: 
\"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.821449 master-0 kubenswrapper[27819]: I0319 09:36:36.821422 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad08921e-91f0-44a8-8d4e-e7cc444f823f-serving-cert\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.826072 master-0 kubenswrapper[27819]: I0319 09:36:36.826031 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-router-certs\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.829026 master-0 kubenswrapper[27819]: I0319 09:36:36.828993 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fgz2\" (UniqueName: \"kubernetes.io/projected/4828bd2e-735f-41ff-94b1-6beed79ce6d1-kube-api-access-6fgz2\") pod \"node-ca-274lf\" (UID: \"4828bd2e-735f-41ff-94b1-6beed79ce6d1\") " pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:36.833506 master-0 kubenswrapper[27819]: I0319 09:36:36.833452 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc8j\" (UniqueName: \"kubernetes.io/projected/ad08921e-91f0-44a8-8d4e-e7cc444f823f-kube-api-access-knc8j\") pod \"controller-manager-75ff5c579f-hpdrn\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.839891 master-0 kubenswrapper[27819]: I0319 09:36:36.839846 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng42l\" 
(UniqueName: \"kubernetes.io/projected/e83a3bcd-f590-4e36-846e-255494625539-kube-api-access-ng42l\") pod \"oauth-openshift-648779ffbc-s842b\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") " pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.839971 master-0 kubenswrapper[27819]: I0319 09:36:36.839932 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8g7w\" (UniqueName: \"kubernetes.io/projected/116b207a-55ef-4ac6-94c2-08574c61a8bb-kube-api-access-t8g7w\") pod \"route-controller-manager-766d7bdc4c-jshsm\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.865836 master-0 kubenswrapper[27819]: I0319 09:36:36.865796 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:36.900290 master-0 kubenswrapper[27819]: I0319 09:36:36.900238 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:36.921070 master-0 kubenswrapper[27819]: I0319 09:36:36.921026 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" Mar 19 09:36:36.943762 master-0 kubenswrapper[27819]: I0319 09:36:36.943049 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:36.961139 master-0 kubenswrapper[27819]: I0319 09:36:36.961064 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-274lf" Mar 19 09:36:37.008315 master-0 kubenswrapper[27819]: W0319 09:36:37.008259 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4828bd2e_735f_41ff_94b1_6beed79ce6d1.slice/crio-a56110ce8cb49f64eaf0e8e0986db3087e2ec9ede231c0ccace71f0c25fac747 WatchSource:0}: Error finding container a56110ce8cb49f64eaf0e8e0986db3087e2ec9ede231c0ccace71f0c25fac747: Status 404 returned error can't find the container with id a56110ce8cb49f64eaf0e8e0986db3087e2ec9ede231c0ccace71f0c25fac747 Mar 19 09:36:37.014214 master-0 kubenswrapper[27819]: I0319 09:36:37.012481 27819 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 09:36:37.268531 master-0 kubenswrapper[27819]: I0319 09:36:37.268481 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm"] Mar 19 09:36:37.269827 master-0 kubenswrapper[27819]: W0319 09:36:37.269759 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod116b207a_55ef_4ac6_94c2_08574c61a8bb.slice/crio-be0979bb2a075a64efe2940aff69719767ca6d437b6ca91f3403cc48f2a43fc2 WatchSource:0}: Error finding container be0979bb2a075a64efe2940aff69719767ca6d437b6ca91f3403cc48f2a43fc2: Status 404 returned error can't find the container with id be0979bb2a075a64efe2940aff69719767ca6d437b6ca91f3403cc48f2a43fc2 Mar 19 09:36:37.304580 master-0 kubenswrapper[27819]: I0319 09:36:37.304508 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-274lf" event={"ID":"4828bd2e-735f-41ff-94b1-6beed79ce6d1","Type":"ContainerStarted","Data":"a56110ce8cb49f64eaf0e8e0986db3087e2ec9ede231c0ccace71f0c25fac747"} Mar 19 09:36:37.305526 master-0 kubenswrapper[27819]: I0319 09:36:37.305475 27819 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" event={"ID":"116b207a-55ef-4ac6-94c2-08574c61a8bb","Type":"ContainerStarted","Data":"be0979bb2a075a64efe2940aff69719767ca6d437b6ca91f3403cc48f2a43fc2"} Mar 19 09:36:37.331101 master-0 kubenswrapper[27819]: I0319 09:36:37.331050 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75ff5c579f-hpdrn"] Mar 19 09:36:37.332998 master-0 kubenswrapper[27819]: W0319 09:36:37.332959 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad08921e_91f0_44a8_8d4e_e7cc444f823f.slice/crio-ebe14d8e4f97b9f774d9404b2a09cc152e83aa2f6f896f1eb0c44d20c03f982b WatchSource:0}: Error finding container ebe14d8e4f97b9f774d9404b2a09cc152e83aa2f6f896f1eb0c44d20c03f982b: Status 404 returned error can't find the container with id ebe14d8e4f97b9f774d9404b2a09cc152e83aa2f6f896f1eb0c44d20c03f982b Mar 19 09:36:37.378024 master-0 kubenswrapper[27819]: I0319 09:36:37.377737 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7"] Mar 19 09:36:37.385850 master-0 kubenswrapper[27819]: W0319 09:36:37.385792 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76d149f3_0740_4fa2_b514_cae52bb6c8ab.slice/crio-11293c1d6346b901f7263f47036b4aa0e974982cd3ee64bd23e2eb7307ad7e6d WatchSource:0}: Error finding container 11293c1d6346b901f7263f47036b4aa0e974982cd3ee64bd23e2eb7307ad7e6d: Status 404 returned error can't find the container with id 11293c1d6346b901f7263f47036b4aa0e974982cd3ee64bd23e2eb7307ad7e6d Mar 19 09:36:37.436155 master-0 kubenswrapper[27819]: I0319 09:36:37.436115 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-648779ffbc-s842b"] Mar 19 09:36:37.437328 
master-0 kubenswrapper[27819]: W0319 09:36:37.437280 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode83a3bcd_f590_4e36_846e_255494625539.slice/crio-bc99706567a8c71095d839793997e74a49782b66a5d679919190004babb90b86 WatchSource:0}: Error finding container bc99706567a8c71095d839793997e74a49782b66a5d679919190004babb90b86: Status 404 returned error can't find the container with id bc99706567a8c71095d839793997e74a49782b66a5d679919190004babb90b86 Mar 19 09:36:38.320770 master-0 kubenswrapper[27819]: I0319 09:36:38.320707 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" event={"ID":"e83a3bcd-f590-4e36-846e-255494625539","Type":"ContainerStarted","Data":"bc99706567a8c71095d839793997e74a49782b66a5d679919190004babb90b86"} Mar 19 09:36:38.322925 master-0 kubenswrapper[27819]: I0319 09:36:38.322897 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" event={"ID":"ad08921e-91f0-44a8-8d4e-e7cc444f823f","Type":"ContainerStarted","Data":"93dd2dcb0d77ba1d218c6f4db63bfd321874a9c3e6b0bfba2b0c1dd702686dfa"} Mar 19 09:36:38.323022 master-0 kubenswrapper[27819]: I0319 09:36:38.322932 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" event={"ID":"ad08921e-91f0-44a8-8d4e-e7cc444f823f","Type":"ContainerStarted","Data":"ebe14d8e4f97b9f774d9404b2a09cc152e83aa2f6f896f1eb0c44d20c03f982b"} Mar 19 09:36:38.327986 master-0 kubenswrapper[27819]: I0319 09:36:38.323948 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:38.327986 master-0 kubenswrapper[27819]: I0319 09:36:38.326911 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" event={"ID":"116b207a-55ef-4ac6-94c2-08574c61a8bb","Type":"ContainerStarted","Data":"1bdf2841cd9d35fe2d5f5c711a8c9721f92ceec119b3866965043281f9ae7db9"} Mar 19 09:36:38.331626 master-0 kubenswrapper[27819]: I0319 09:36:38.328458 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:38.331626 master-0 kubenswrapper[27819]: I0319 09:36:38.329759 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:36:38.331626 master-0 kubenswrapper[27819]: I0319 09:36:38.329836 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" event={"ID":"76d149f3-0740-4fa2-b514-cae52bb6c8ab","Type":"ContainerStarted","Data":"11293c1d6346b901f7263f47036b4aa0e974982cd3ee64bd23e2eb7307ad7e6d"} Mar 19 09:36:38.340829 master-0 kubenswrapper[27819]: I0319 09:36:38.340787 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:36:38.357465 master-0 kubenswrapper[27819]: I0319 09:36:38.357372 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" podStartSLOduration=534.357329563 podStartE2EDuration="8m54.357329563s" podCreationTimestamp="2026-03-19 09:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:36:38.349043493 +0000 UTC m=+183.270621185" watchObservedRunningTime="2026-03-19 09:36:38.357329563 +0000 UTC m=+183.278907255" Mar 19 09:36:38.422625 master-0 kubenswrapper[27819]: I0319 09:36:38.422510 27819 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" podStartSLOduration=534.422486484 podStartE2EDuration="8m54.422486484s" podCreationTimestamp="2026-03-19 09:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:36:38.3756836 +0000 UTC m=+183.297261312" watchObservedRunningTime="2026-03-19 09:36:38.422486484 +0000 UTC m=+183.344064176" Mar 19 09:36:41.396602 master-0 kubenswrapper[27819]: I0319 09:36:41.396510 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" event={"ID":"e83a3bcd-f590-4e36-846e-255494625539","Type":"ContainerStarted","Data":"eb91127cd4ca2ec42d2aac565b3f52f7b96ff03f93c2022cac6c15068b6a06a3"} Mar 19 09:36:41.397256 master-0 kubenswrapper[27819]: I0319 09:36:41.397215 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:41.398431 master-0 kubenswrapper[27819]: I0319 09:36:41.398390 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-274lf" event={"ID":"4828bd2e-735f-41ff-94b1-6beed79ce6d1","Type":"ContainerStarted","Data":"54140bd7e5b0ee962ebcbbee24110aec4283930adfd3ed2404b5abbd047b3346"} Mar 19 09:36:41.400322 master-0 kubenswrapper[27819]: I0319 09:36:41.400274 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" event={"ID":"76d149f3-0740-4fa2-b514-cae52bb6c8ab","Type":"ContainerStarted","Data":"bd29ea3cc5cafe65dfa0d3186e3ec63fc443f1fa0ada8a5fa58cf181c134a3ba"} Mar 19 09:36:41.401258 master-0 kubenswrapper[27819]: I0319 09:36:41.401217 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" Mar 19 09:36:41.403138 master-0 kubenswrapper[27819]: 
I0319 09:36:41.403086 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" Mar 19 09:36:41.407099 master-0 kubenswrapper[27819]: I0319 09:36:41.407032 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" Mar 19 09:36:41.423443 master-0 kubenswrapper[27819]: I0319 09:36:41.423369 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" podStartSLOduration=146.240020657 podStartE2EDuration="2m29.423353311s" podCreationTimestamp="2026-03-19 09:34:12 +0000 UTC" firstStartedPulling="2026-03-19 09:36:37.441154763 +0000 UTC m=+182.362732445" lastFinishedPulling="2026-03-19 09:36:40.624487407 +0000 UTC m=+185.546065099" observedRunningTime="2026-03-19 09:36:41.420127362 +0000 UTC m=+186.341705084" watchObservedRunningTime="2026-03-19 09:36:41.423353311 +0000 UTC m=+186.344931003" Mar 19 09:36:41.455886 master-0 kubenswrapper[27819]: I0319 09:36:41.455808 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6d57747dbc-ql8p7" podStartSLOduration=147.248302457 podStartE2EDuration="2m30.455784869s" podCreationTimestamp="2026-03-19 09:34:11 +0000 UTC" firstStartedPulling="2026-03-19 09:36:37.387414856 +0000 UTC m=+182.308992548" lastFinishedPulling="2026-03-19 09:36:40.594897268 +0000 UTC m=+185.516474960" observedRunningTime="2026-03-19 09:36:41.455624554 +0000 UTC m=+186.377202256" watchObservedRunningTime="2026-03-19 09:36:41.455784869 +0000 UTC m=+186.377362571" Mar 19 09:36:41.477772 master-0 kubenswrapper[27819]: I0319 09:36:41.477677 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-274lf" podStartSLOduration=163.895167391 podStartE2EDuration="2m47.477652254s" podCreationTimestamp="2026-03-19 
09:33:54 +0000 UTC" firstStartedPulling="2026-03-19 09:36:37.012410295 +0000 UTC m=+181.933987987" lastFinishedPulling="2026-03-19 09:36:40.594895158 +0000 UTC m=+185.516472850" observedRunningTime="2026-03-19 09:36:41.473622312 +0000 UTC m=+186.395200004" watchObservedRunningTime="2026-03-19 09:36:41.477652254 +0000 UTC m=+186.399229946" Mar 19 09:36:58.201371 master-0 kubenswrapper[27819]: E0319 09:36:58.201298 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3939b09ae7c21557b3dd5ab01349318.slice/crio-conmon-5be498e28584ea542ce41a8bc159a7ded439c5a053defc650ccce7fc0d099fa0.scope\": RecentStats: unable to find data in memory cache]" Mar 19 09:36:58.523426 master-0 kubenswrapper[27819]: I0319 09:36:58.523097 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager/1.log" Mar 19 09:36:58.525327 master-0 kubenswrapper[27819]: I0319 09:36:58.525105 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager/0.log" Mar 19 09:36:58.525327 master-0 kubenswrapper[27819]: I0319 09:36:58.525187 27819 generic.go:334] "Generic (PLEG): container finished" podID="d3939b09ae7c21557b3dd5ab01349318" containerID="5be498e28584ea542ce41a8bc159a7ded439c5a053defc650ccce7fc0d099fa0" exitCode=137 Mar 19 09:36:58.525327 master-0 kubenswrapper[27819]: I0319 09:36:58.525244 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerDied","Data":"5be498e28584ea542ce41a8bc159a7ded439c5a053defc650ccce7fc0d099fa0"} Mar 19 09:36:58.525494 master-0 kubenswrapper[27819]: I0319 
09:36:58.525438 27819 scope.go:117] "RemoveContainer" containerID="32eb7fb05bc6b163861c244117b54dba57fa4d47af128f0765f0e871f04fa152" Mar 19 09:36:59.537383 master-0 kubenswrapper[27819]: I0319 09:36:59.537319 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager/1.log" Mar 19 09:36:59.538191 master-0 kubenswrapper[27819]: I0319 09:36:59.538140 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d3939b09ae7c21557b3dd5ab01349318","Type":"ContainerStarted","Data":"f3ffe4fec33c46fff754b84bc96e8c84dff07f2714439153ed5a5e81bfd1df38"} Mar 19 09:37:07.380703 master-0 kubenswrapper[27819]: I0319 09:37:07.380631 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:37:08.153705 master-0 kubenswrapper[27819]: I0319 09:37:08.153616 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:37:08.157513 master-0 kubenswrapper[27819]: I0319 09:37:08.157475 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:37:08.598559 master-0 kubenswrapper[27819]: I0319 09:37:08.598492 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:37:19.457380 master-0 kubenswrapper[27819]: I0319 09:37:19.457337 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-6h8lg"] Mar 19 09:37:19.458087 master-0 kubenswrapper[27819]: I0319 09:37:19.458065 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.465241 master-0 kubenswrapper[27819]: I0319 09:37:19.464866 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 19 09:37:19.465241 master-0 kubenswrapper[27819]: I0319 09:37:19.465026 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 19 09:37:19.465241 master-0 kubenswrapper[27819]: I0319 09:37:19.465174 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 19 09:37:19.466798 master-0 kubenswrapper[27819]: I0319 09:37:19.465681 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 19 09:37:19.469820 master-0 kubenswrapper[27819]: I0319 09:37:19.469068 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 19 09:37:19.479765 master-0 kubenswrapper[27819]: I0319 09:37:19.477687 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-6jw6f" Mar 19 09:37:19.499817 master-0 kubenswrapper[27819]: I0319 09:37:19.498599 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-6h8lg"] Mar 19 09:37:19.615848 master-0 kubenswrapper[27819]: I0319 09:37:19.615770 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-config\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.616037 master-0 kubenswrapper[27819]: I0319 09:37:19.615883 27819 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-trusted-ca\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.616037 master-0 kubenswrapper[27819]: I0319 09:37:19.615998 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thvmt\" (UniqueName: \"kubernetes.io/projected/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-kube-api-access-thvmt\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.616110 master-0 kubenswrapper[27819]: I0319 09:37:19.616067 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-serving-cert\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.717404 master-0 kubenswrapper[27819]: I0319 09:37:19.717278 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-config\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.717404 master-0 kubenswrapper[27819]: I0319 09:37:19.717363 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-trusted-ca\") pod \"console-operator-76b6568d85-6h8lg\" (UID: 
\"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.717640 master-0 kubenswrapper[27819]: I0319 09:37:19.717472 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thvmt\" (UniqueName: \"kubernetes.io/projected/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-kube-api-access-thvmt\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.717640 master-0 kubenswrapper[27819]: I0319 09:37:19.717537 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-serving-cert\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.719201 master-0 kubenswrapper[27819]: I0319 09:37:19.719162 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-config\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.719327 master-0 kubenswrapper[27819]: I0319 09:37:19.719295 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-trusted-ca\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.721598 master-0 kubenswrapper[27819]: I0319 09:37:19.721532 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-serving-cert\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.757200 master-0 kubenswrapper[27819]: I0319 09:37:19.757154 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thvmt\" (UniqueName: \"kubernetes.io/projected/d7327ecd-7cba-4ab6-aa0f-5ff0504c0918-kube-api-access-thvmt\") pod \"console-operator-76b6568d85-6h8lg\" (UID: \"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918\") " pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:19.782980 master-0 kubenswrapper[27819]: I0319 09:37:19.782920 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:20.399427 master-0 kubenswrapper[27819]: I0319 09:37:20.399380 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-6h8lg"] Mar 19 09:37:20.400649 master-0 kubenswrapper[27819]: W0319 09:37:20.400589 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7327ecd_7cba_4ab6_aa0f_5ff0504c0918.slice/crio-7188bb00f4fc48834d0006556c77803dd6fc643b6f0c31913fb420ac02ec3d89 WatchSource:0}: Error finding container 7188bb00f4fc48834d0006556c77803dd6fc643b6f0c31913fb420ac02ec3d89: Status 404 returned error can't find the container with id 7188bb00f4fc48834d0006556c77803dd6fc643b6f0c31913fb420ac02ec3d89 Mar 19 09:37:20.666828 master-0 kubenswrapper[27819]: I0319 09:37:20.666697 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" event={"ID":"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918","Type":"ContainerStarted","Data":"7188bb00f4fc48834d0006556c77803dd6fc643b6f0c31913fb420ac02ec3d89"} Mar 
19 09:37:23.687613 master-0 kubenswrapper[27819]: I0319 09:37:23.687528 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" event={"ID":"d7327ecd-7cba-4ab6-aa0f-5ff0504c0918","Type":"ContainerStarted","Data":"19e5ff515b42aa91fd7e559826e563c4ea09e9ddc662bee6839eeacb4fcc16db"} Mar 19 09:37:23.688157 master-0 kubenswrapper[27819]: I0319 09:37:23.688027 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:23.689303 master-0 kubenswrapper[27819]: I0319 09:37:23.689235 27819 patch_prober.go:28] interesting pod/console-operator-76b6568d85-6h8lg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.89:8443/readyz\": dial tcp 10.128.0.89:8443: connect: connection refused" start-of-body= Mar 19 09:37:23.689522 master-0 kubenswrapper[27819]: I0319 09:37:23.689313 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" podUID="d7327ecd-7cba-4ab6-aa0f-5ff0504c0918" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.89:8443/readyz\": dial tcp 10.128.0.89:8443: connect: connection refused" Mar 19 09:37:23.710605 master-0 kubenswrapper[27819]: I0319 09:37:23.710426 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" podStartSLOduration=1.6333165109999999 podStartE2EDuration="4.710399757s" podCreationTimestamp="2026-03-19 09:37:19 +0000 UTC" firstStartedPulling="2026-03-19 09:37:20.402694403 +0000 UTC m=+225.324272105" lastFinishedPulling="2026-03-19 09:37:23.479777659 +0000 UTC m=+228.401355351" observedRunningTime="2026-03-19 09:37:23.706325695 +0000 UTC m=+228.627903387" watchObservedRunningTime="2026-03-19 09:37:23.710399757 +0000 UTC m=+228.631977469" 
Mar 19 09:37:24.218039 master-0 kubenswrapper[27819]: I0319 09:37:24.217957 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-9mnzh"] Mar 19 09:37:24.218753 master-0 kubenswrapper[27819]: I0319 09:37:24.218724 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-9mnzh" Mar 19 09:37:24.220888 master-0 kubenswrapper[27819]: I0319 09:37:24.220848 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-fbtsj" Mar 19 09:37:24.221196 master-0 kubenswrapper[27819]: I0319 09:37:24.221176 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 19 09:37:24.221418 master-0 kubenswrapper[27819]: I0319 09:37:24.221398 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 19 09:37:24.228378 master-0 kubenswrapper[27819]: I0319 09:37:24.228284 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-9mnzh"] Mar 19 09:37:24.384039 master-0 kubenswrapper[27819]: I0319 09:37:24.383977 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nj8g\" (UniqueName: \"kubernetes.io/projected/786e8363-5cf8-45b3-a02c-70db5d6252f2-kube-api-access-5nj8g\") pod \"downloads-66b8ffb895-9mnzh\" (UID: \"786e8363-5cf8-45b3-a02c-70db5d6252f2\") " pod="openshift-console/downloads-66b8ffb895-9mnzh" Mar 19 09:37:24.485526 master-0 kubenswrapper[27819]: I0319 09:37:24.485362 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nj8g\" (UniqueName: \"kubernetes.io/projected/786e8363-5cf8-45b3-a02c-70db5d6252f2-kube-api-access-5nj8g\") pod \"downloads-66b8ffb895-9mnzh\" (UID: \"786e8363-5cf8-45b3-a02c-70db5d6252f2\") " pod="openshift-console/downloads-66b8ffb895-9mnzh" 
Mar 19 09:37:24.502618 master-0 kubenswrapper[27819]: I0319 09:37:24.501862 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nj8g\" (UniqueName: \"kubernetes.io/projected/786e8363-5cf8-45b3-a02c-70db5d6252f2-kube-api-access-5nj8g\") pod \"downloads-66b8ffb895-9mnzh\" (UID: \"786e8363-5cf8-45b3-a02c-70db5d6252f2\") " pod="openshift-console/downloads-66b8ffb895-9mnzh" Mar 19 09:37:24.565585 master-0 kubenswrapper[27819]: I0319 09:37:24.565395 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-9mnzh" Mar 19 09:37:24.701481 master-0 kubenswrapper[27819]: I0319 09:37:24.701403 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-6h8lg" Mar 19 09:37:24.953157 master-0 kubenswrapper[27819]: I0319 09:37:24.953041 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-twjd5"] Mar 19 09:37:24.953890 master-0 kubenswrapper[27819]: I0319 09:37:24.953858 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:24.955915 master-0 kubenswrapper[27819]: I0319 09:37:24.955871 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-9m8dj" Mar 19 09:37:24.956127 master-0 kubenswrapper[27819]: I0319 09:37:24.956081 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 19 09:37:24.961595 master-0 kubenswrapper[27819]: I0319 09:37:24.960155 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 19 09:37:24.981933 master-0 kubenswrapper[27819]: W0319 09:37:24.981885 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod786e8363_5cf8_45b3_a02c_70db5d6252f2.slice/crio-c5afa69fb41d6255d249baddf6bdf4c829787bfca209f0eb9ee53c80b3b09cfd WatchSource:0}: Error finding container c5afa69fb41d6255d249baddf6bdf4c829787bfca209f0eb9ee53c80b3b09cfd: Status 404 returned error can't find the container with id c5afa69fb41d6255d249baddf6bdf4c829787bfca209f0eb9ee53c80b3b09cfd Mar 19 09:37:24.989017 master-0 kubenswrapper[27819]: I0319 09:37:24.988969 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-twjd5"] Mar 19 09:37:24.998976 master-0 kubenswrapper[27819]: I0319 09:37:24.998862 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-9mnzh"] Mar 19 09:37:25.095453 master-0 kubenswrapper[27819]: I0319 09:37:25.095386 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3feb366a-14da-479d-852a-d6c185383025-networking-console-plugin-cert\") pod 
\"networking-console-plugin-7c6b76c555-twjd5\" (UID: \"3feb366a-14da-479d-852a-d6c185383025\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:25.095453 master-0 kubenswrapper[27819]: I0319 09:37:25.095449 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3feb366a-14da-479d-852a-d6c185383025-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-twjd5\" (UID: \"3feb366a-14da-479d-852a-d6c185383025\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:25.196866 master-0 kubenswrapper[27819]: I0319 09:37:25.196798 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3feb366a-14da-479d-852a-d6c185383025-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-twjd5\" (UID: \"3feb366a-14da-479d-852a-d6c185383025\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:25.197107 master-0 kubenswrapper[27819]: I0319 09:37:25.196976 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3feb366a-14da-479d-852a-d6c185383025-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-twjd5\" (UID: \"3feb366a-14da-479d-852a-d6c185383025\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:25.197843 master-0 kubenswrapper[27819]: I0319 09:37:25.197808 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3feb366a-14da-479d-852a-d6c185383025-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-twjd5\" (UID: \"3feb366a-14da-479d-852a-d6c185383025\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:25.200653 
master-0 kubenswrapper[27819]: I0319 09:37:25.200625 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3feb366a-14da-479d-852a-d6c185383025-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-twjd5\" (UID: \"3feb366a-14da-479d-852a-d6c185383025\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:25.302992 master-0 kubenswrapper[27819]: I0319 09:37:25.302862 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" Mar 19 09:37:25.699997 master-0 kubenswrapper[27819]: I0319 09:37:25.699949 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-9mnzh" event={"ID":"786e8363-5cf8-45b3-a02c-70db5d6252f2","Type":"ContainerStarted","Data":"c5afa69fb41d6255d249baddf6bdf4c829787bfca209f0eb9ee53c80b3b09cfd"} Mar 19 09:37:25.766046 master-0 kubenswrapper[27819]: W0319 09:37:25.765999 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3feb366a_14da_479d_852a_d6c185383025.slice/crio-00abf545ed7df018d4cc555c574d1f71ffc5b83eca5c8e179f8eaac402771074 WatchSource:0}: Error finding container 00abf545ed7df018d4cc555c574d1f71ffc5b83eca5c8e179f8eaac402771074: Status 404 returned error can't find the container with id 00abf545ed7df018d4cc555c574d1f71ffc5b83eca5c8e179f8eaac402771074 Mar 19 09:37:25.766474 master-0 kubenswrapper[27819]: I0319 09:37:25.766436 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-twjd5"] Mar 19 09:37:26.706455 master-0 kubenswrapper[27819]: I0319 09:37:26.706393 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" 
event={"ID":"3feb366a-14da-479d-852a-d6c185383025","Type":"ContainerStarted","Data":"00abf545ed7df018d4cc555c574d1f71ffc5b83eca5c8e179f8eaac402771074"} Mar 19 09:37:27.714597 master-0 kubenswrapper[27819]: I0319 09:37:27.713929 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" event={"ID":"3feb366a-14da-479d-852a-d6c185383025","Type":"ContainerStarted","Data":"78237e143ab64d7443276f648643bfd02c2fafb658a105bd816d7559b061fd14"} Mar 19 09:37:27.718301 master-0 kubenswrapper[27819]: I0319 09:37:27.718239 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5df74776c8-t52bf"] Mar 19 09:37:27.719314 master-0 kubenswrapper[27819]: I0319 09:37:27.719280 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.721166 master-0 kubenswrapper[27819]: I0319 09:37:27.721000 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 19 09:37:27.721677 master-0 kubenswrapper[27819]: I0319 09:37:27.721516 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-vbqsq" Mar 19 09:37:27.721677 master-0 kubenswrapper[27819]: I0319 09:37:27.721583 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 19 09:37:27.721677 master-0 kubenswrapper[27819]: I0319 09:37:27.721602 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 19 09:37:27.723578 master-0 kubenswrapper[27819]: I0319 09:37:27.723202 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 19 09:37:27.732704 master-0 kubenswrapper[27819]: I0319 09:37:27.732649 27819 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Mar 19 09:37:27.743404 master-0 kubenswrapper[27819]: I0319 09:37:27.743319 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-twjd5" podStartSLOduration=2.574352858 podStartE2EDuration="3.743294298s" podCreationTimestamp="2026-03-19 09:37:24 +0000 UTC" firstStartedPulling="2026-03-19 09:37:25.767955655 +0000 UTC m=+230.689533347" lastFinishedPulling="2026-03-19 09:37:26.936897095 +0000 UTC m=+231.858474787" observedRunningTime="2026-03-19 09:37:27.737787726 +0000 UTC m=+232.659365438" watchObservedRunningTime="2026-03-19 09:37:27.743294298 +0000 UTC m=+232.664871990" Mar 19 09:37:27.757677 master-0 kubenswrapper[27819]: I0319 09:37:27.755686 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5df74776c8-t52bf"] Mar 19 09:37:27.846675 master-0 kubenswrapper[27819]: I0319 09:37:27.846629 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-console-config\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.846675 master-0 kubenswrapper[27819]: I0319 09:37:27.846673 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z227m\" (UniqueName: \"kubernetes.io/projected/9450ec1b-239d-45ce-9747-a1b372326025-kube-api-access-z227m\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.846934 master-0 kubenswrapper[27819]: I0319 09:37:27.846717 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-service-ca\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.846934 master-0 kubenswrapper[27819]: I0319 09:37:27.846824 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-oauth-config\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.846934 master-0 kubenswrapper[27819]: I0319 09:37:27.846847 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-serving-cert\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.846934 master-0 kubenswrapper[27819]: I0319 09:37:27.846871 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-oauth-serving-cert\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.949039 master-0 kubenswrapper[27819]: I0319 09:37:27.948957 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-oauth-config\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.949236 master-0 kubenswrapper[27819]: I0319 09:37:27.949043 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-serving-cert\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.949236 master-0 kubenswrapper[27819]: I0319 09:37:27.949107 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-oauth-serving-cert\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.949359 master-0 kubenswrapper[27819]: I0319 09:37:27.949322 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-console-config\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.949418 master-0 kubenswrapper[27819]: I0319 09:37:27.949373 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z227m\" (UniqueName: \"kubernetes.io/projected/9450ec1b-239d-45ce-9747-a1b372326025-kube-api-access-z227m\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.949418 master-0 kubenswrapper[27819]: I0319 09:37:27.949400 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-service-ca\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.950293 master-0 
kubenswrapper[27819]: I0319 09:37:27.950257 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-console-config\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.950718 master-0 kubenswrapper[27819]: I0319 09:37:27.950638 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-service-ca\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.950845 master-0 kubenswrapper[27819]: I0319 09:37:27.950788 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-oauth-serving-cert\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.952793 master-0 kubenswrapper[27819]: I0319 09:37:27.952759 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-serving-cert\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.953347 master-0 kubenswrapper[27819]: I0319 09:37:27.953306 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-oauth-config\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:27.966240 master-0 
kubenswrapper[27819]: I0319 09:37:27.966162 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z227m\" (UniqueName: \"kubernetes.io/projected/9450ec1b-239d-45ce-9747-a1b372326025-kube-api-access-z227m\") pod \"console-5df74776c8-t52bf\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") " pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:28.034161 master-0 kubenswrapper[27819]: I0319 09:37:28.033414 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5df74776c8-t52bf" Mar 19 09:37:28.421812 master-0 kubenswrapper[27819]: I0319 09:37:28.421761 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5df74776c8-t52bf"] Mar 19 09:37:28.423040 master-0 kubenswrapper[27819]: W0319 09:37:28.422999 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9450ec1b_239d_45ce_9747_a1b372326025.slice/crio-71a38fe666175fc054146703e8381aaba96a29b41d815b2f3b825bff34cdc0bd WatchSource:0}: Error finding container 71a38fe666175fc054146703e8381aaba96a29b41d815b2f3b825bff34cdc0bd: Status 404 returned error can't find the container with id 71a38fe666175fc054146703e8381aaba96a29b41d815b2f3b825bff34cdc0bd Mar 19 09:37:28.721073 master-0 kubenswrapper[27819]: I0319 09:37:28.720951 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df74776c8-t52bf" event={"ID":"9450ec1b-239d-45ce-9747-a1b372326025","Type":"ContainerStarted","Data":"71a38fe666175fc054146703e8381aaba96a29b41d815b2f3b825bff34cdc0bd"} Mar 19 09:37:32.747248 master-0 kubenswrapper[27819]: I0319 09:37:32.747152 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df74776c8-t52bf" event={"ID":"9450ec1b-239d-45ce-9747-a1b372326025","Type":"ContainerStarted","Data":"3af19e0a5e7a5d9be4dd5ee061e967c8fc4f24aca73841b63fcdabfeb0b00164"} Mar 19 
09:37:32.778054 master-0 kubenswrapper[27819]: I0319 09:37:32.777947 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5df74776c8-t52bf" podStartSLOduration=2.121729062 podStartE2EDuration="5.777931285s" podCreationTimestamp="2026-03-19 09:37:27 +0000 UTC" firstStartedPulling="2026-03-19 09:37:28.426220426 +0000 UTC m=+233.347798118" lastFinishedPulling="2026-03-19 09:37:32.082422649 +0000 UTC m=+237.004000341" observedRunningTime="2026-03-19 09:37:32.772823683 +0000 UTC m=+237.694401395" watchObservedRunningTime="2026-03-19 09:37:32.777931285 +0000 UTC m=+237.699508977" Mar 19 09:37:34.010062 master-0 kubenswrapper[27819]: I0319 09:37:34.009781 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d97df8cb5-9hrl2"] Mar 19 09:37:34.010932 master-0 kubenswrapper[27819]: I0319 09:37:34.010901 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.022089 master-0 kubenswrapper[27819]: I0319 09:37:34.022023 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d97df8cb5-9hrl2"] Mar 19 09:37:34.026295 master-0 kubenswrapper[27819]: I0319 09:37:34.026007 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 19 09:37:34.151564 master-0 kubenswrapper[27819]: I0319 09:37:34.151298 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-service-ca\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.151564 master-0 kubenswrapper[27819]: I0319 09:37:34.151362 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-config\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.151564 master-0 kubenswrapper[27819]: I0319 09:37:34.151387 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-trusted-ca-bundle\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.151564 master-0 kubenswrapper[27819]: I0319 09:37:34.151466 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cplnx\" (UniqueName: \"kubernetes.io/projected/25a9e5e1-e5d5-457d-8c54-8b58dca34985-kube-api-access-cplnx\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.151564 master-0 kubenswrapper[27819]: I0319 09:37:34.151487 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-oauth-serving-cert\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.151564 master-0 kubenswrapper[27819]: I0319 09:37:34.151525 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-oauth-config\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.151926 master-0 
kubenswrapper[27819]: I0319 09:37:34.151579 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-serving-cert\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.252947 master-0 kubenswrapper[27819]: I0319 09:37:34.252889 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-oauth-config\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.253179 master-0 kubenswrapper[27819]: I0319 09:37:34.252963 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-serving-cert\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.253179 master-0 kubenswrapper[27819]: I0319 09:37:34.253043 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-service-ca\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.253179 master-0 kubenswrapper[27819]: I0319 09:37:34.253071 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-config\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " 
pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.253179 master-0 kubenswrapper[27819]: I0319 09:37:34.253097 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-trusted-ca-bundle\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.253179 master-0 kubenswrapper[27819]: I0319 09:37:34.253129 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cplnx\" (UniqueName: \"kubernetes.io/projected/25a9e5e1-e5d5-457d-8c54-8b58dca34985-kube-api-access-cplnx\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.253179 master-0 kubenswrapper[27819]: I0319 09:37:34.253153 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-oauth-serving-cert\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.254071 master-0 kubenswrapper[27819]: I0319 09:37:34.254043 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-oauth-serving-cert\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.254321 master-0 kubenswrapper[27819]: I0319 09:37:34.254293 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-config\") pod \"console-5d97df8cb5-9hrl2\" (UID: 
\"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.254659 master-0 kubenswrapper[27819]: I0319 09:37:34.254637 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-service-ca\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.254861 master-0 kubenswrapper[27819]: I0319 09:37:34.254775 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-trusted-ca-bundle\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.256405 master-0 kubenswrapper[27819]: I0319 09:37:34.256384 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-serving-cert\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.257133 master-0 kubenswrapper[27819]: I0319 09:37:34.257095 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-oauth-config\") pod \"console-5d97df8cb5-9hrl2\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.269429 master-0 kubenswrapper[27819]: I0319 09:37:34.269270 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cplnx\" (UniqueName: \"kubernetes.io/projected/25a9e5e1-e5d5-457d-8c54-8b58dca34985-kube-api-access-cplnx\") pod \"console-5d97df8cb5-9hrl2\" 
(UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.354231 master-0 kubenswrapper[27819]: I0319 09:37:34.354173 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:37:34.485975 master-0 kubenswrapper[27819]: I0319 09:37:34.485901 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75ff5c579f-hpdrn"] Mar 19 09:37:34.485975 master-0 kubenswrapper[27819]: I0319 09:37:34.485967 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm"] Mar 19 09:37:34.486639 master-0 kubenswrapper[27819]: I0319 09:37:34.486120 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" podUID="116b207a-55ef-4ac6-94c2-08574c61a8bb" containerName="route-controller-manager" containerID="cri-o://1bdf2841cd9d35fe2d5f5c711a8c9721f92ceec119b3866965043281f9ae7db9" gracePeriod=30 Mar 19 09:37:34.486639 master-0 kubenswrapper[27819]: I0319 09:37:34.486381 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" podUID="ad08921e-91f0-44a8-8d4e-e7cc444f823f" containerName="controller-manager" containerID="cri-o://93dd2dcb0d77ba1d218c6f4db63bfd321874a9c3e6b0bfba2b0c1dd702686dfa" gracePeriod=30 Mar 19 09:37:34.763185 master-0 kubenswrapper[27819]: I0319 09:37:34.763102 27819 generic.go:334] "Generic (PLEG): container finished" podID="ad08921e-91f0-44a8-8d4e-e7cc444f823f" containerID="93dd2dcb0d77ba1d218c6f4db63bfd321874a9c3e6b0bfba2b0c1dd702686dfa" exitCode=0 Mar 19 09:37:34.763365 master-0 kubenswrapper[27819]: I0319 09:37:34.763193 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" event={"ID":"ad08921e-91f0-44a8-8d4e-e7cc444f823f","Type":"ContainerDied","Data":"93dd2dcb0d77ba1d218c6f4db63bfd321874a9c3e6b0bfba2b0c1dd702686dfa"} Mar 19 09:37:34.765834 master-0 kubenswrapper[27819]: I0319 09:37:34.765803 27819 generic.go:334] "Generic (PLEG): container finished" podID="116b207a-55ef-4ac6-94c2-08574c61a8bb" containerID="1bdf2841cd9d35fe2d5f5c711a8c9721f92ceec119b3866965043281f9ae7db9" exitCode=0 Mar 19 09:37:34.765834 master-0 kubenswrapper[27819]: I0319 09:37:34.765835 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" event={"ID":"116b207a-55ef-4ac6-94c2-08574c61a8bb","Type":"ContainerDied","Data":"1bdf2841cd9d35fe2d5f5c711a8c9721f92ceec119b3866965043281f9ae7db9"} Mar 19 09:37:34.786607 master-0 kubenswrapper[27819]: I0319 09:37:34.786512 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d97df8cb5-9hrl2"] Mar 19 09:37:34.992898 master-0 kubenswrapper[27819]: I0319 09:37:34.992870 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:37:34.997331 master-0 kubenswrapper[27819]: I0319 09:37:34.997307 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170092 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knc8j\" (UniqueName: \"kubernetes.io/projected/ad08921e-91f0-44a8-8d4e-e7cc444f823f-kube-api-access-knc8j\") pod \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170153 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-proxy-ca-bundles\") pod \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170235 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad08921e-91f0-44a8-8d4e-e7cc444f823f-serving-cert\") pod \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170256 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-client-ca\") pod \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170642 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ad08921e-91f0-44a8-8d4e-e7cc444f823f" (UID: "ad08921e-91f0-44a8-8d4e-e7cc444f823f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170651 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ad08921e-91f0-44a8-8d4e-e7cc444f823f" (UID: "ad08921e-91f0-44a8-8d4e-e7cc444f823f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170674 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-config\") pod \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\" (UID: \"ad08921e-91f0-44a8-8d4e-e7cc444f823f\") " Mar 19 09:37:35.170764 master-0 kubenswrapper[27819]: I0319 09:37:35.170749 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-config\") pod \"116b207a-55ef-4ac6-94c2-08574c61a8bb\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " Mar 19 09:37:35.171493 master-0 kubenswrapper[27819]: I0319 09:37:35.170786 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8g7w\" (UniqueName: \"kubernetes.io/projected/116b207a-55ef-4ac6-94c2-08574c61a8bb-kube-api-access-t8g7w\") pod \"116b207a-55ef-4ac6-94c2-08574c61a8bb\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " Mar 19 09:37:35.171493 master-0 kubenswrapper[27819]: I0319 09:37:35.170826 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-client-ca\") pod \"116b207a-55ef-4ac6-94c2-08574c61a8bb\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " Mar 19 09:37:35.171493 master-0 
kubenswrapper[27819]: I0319 09:37:35.170863 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/116b207a-55ef-4ac6-94c2-08574c61a8bb-serving-cert\") pod \"116b207a-55ef-4ac6-94c2-08574c61a8bb\" (UID: \"116b207a-55ef-4ac6-94c2-08574c61a8bb\") " Mar 19 09:37:35.171493 master-0 kubenswrapper[27819]: I0319 09:37:35.171116 27819 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.171493 master-0 kubenswrapper[27819]: I0319 09:37:35.171133 27819 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.171493 master-0 kubenswrapper[27819]: I0319 09:37:35.171261 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-config" (OuterVolumeSpecName: "config") pod "ad08921e-91f0-44a8-8d4e-e7cc444f823f" (UID: "ad08921e-91f0-44a8-8d4e-e7cc444f823f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:37:35.171493 master-0 kubenswrapper[27819]: I0319 09:37:35.171276 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-config" (OuterVolumeSpecName: "config") pod "116b207a-55ef-4ac6-94c2-08574c61a8bb" (UID: "116b207a-55ef-4ac6-94c2-08574c61a8bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:37:35.171791 master-0 kubenswrapper[27819]: I0319 09:37:35.171752 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "116b207a-55ef-4ac6-94c2-08574c61a8bb" (UID: "116b207a-55ef-4ac6-94c2-08574c61a8bb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:37:35.175095 master-0 kubenswrapper[27819]: I0319 09:37:35.174038 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad08921e-91f0-44a8-8d4e-e7cc444f823f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ad08921e-91f0-44a8-8d4e-e7cc444f823f" (UID: "ad08921e-91f0-44a8-8d4e-e7cc444f823f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:37:35.175095 master-0 kubenswrapper[27819]: I0319 09:37:35.174087 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad08921e-91f0-44a8-8d4e-e7cc444f823f-kube-api-access-knc8j" (OuterVolumeSpecName: "kube-api-access-knc8j") pod "ad08921e-91f0-44a8-8d4e-e7cc444f823f" (UID: "ad08921e-91f0-44a8-8d4e-e7cc444f823f"). InnerVolumeSpecName "kube-api-access-knc8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:37:35.175461 master-0 kubenswrapper[27819]: I0319 09:37:35.175422 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/116b207a-55ef-4ac6-94c2-08574c61a8bb-kube-api-access-t8g7w" (OuterVolumeSpecName: "kube-api-access-t8g7w") pod "116b207a-55ef-4ac6-94c2-08574c61a8bb" (UID: "116b207a-55ef-4ac6-94c2-08574c61a8bb"). InnerVolumeSpecName "kube-api-access-t8g7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:37:35.175639 master-0 kubenswrapper[27819]: I0319 09:37:35.175594 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/116b207a-55ef-4ac6-94c2-08574c61a8bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "116b207a-55ef-4ac6-94c2-08574c61a8bb" (UID: "116b207a-55ef-4ac6-94c2-08574c61a8bb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:37:35.272437 master-0 kubenswrapper[27819]: I0319 09:37:35.272358 27819 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad08921e-91f0-44a8-8d4e-e7cc444f823f-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.272437 master-0 kubenswrapper[27819]: I0319 09:37:35.272406 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad08921e-91f0-44a8-8d4e-e7cc444f823f-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.272437 master-0 kubenswrapper[27819]: I0319 09:37:35.272417 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.272437 master-0 kubenswrapper[27819]: I0319 09:37:35.272429 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8g7w\" (UniqueName: \"kubernetes.io/projected/116b207a-55ef-4ac6-94c2-08574c61a8bb-kube-api-access-t8g7w\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.272437 master-0 kubenswrapper[27819]: I0319 09:37:35.272443 27819 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/116b207a-55ef-4ac6-94c2-08574c61a8bb-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.272437 master-0 kubenswrapper[27819]: I0319 09:37:35.272452 27819 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/116b207a-55ef-4ac6-94c2-08574c61a8bb-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.273046 master-0 kubenswrapper[27819]: I0319 09:37:35.272464 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knc8j\" (UniqueName: \"kubernetes.io/projected/ad08921e-91f0-44a8-8d4e-e7cc444f823f-kube-api-access-knc8j\") on node \"master-0\" DevicePath \"\"" Mar 19 09:37:35.776179 master-0 kubenswrapper[27819]: I0319 09:37:35.776121 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" event={"ID":"116b207a-55ef-4ac6-94c2-08574c61a8bb","Type":"ContainerDied","Data":"be0979bb2a075a64efe2940aff69719767ca6d437b6ca91f3403cc48f2a43fc2"} Mar 19 09:37:35.776393 master-0 kubenswrapper[27819]: I0319 09:37:35.776183 27819 scope.go:117] "RemoveContainer" containerID="1bdf2841cd9d35fe2d5f5c711a8c9721f92ceec119b3866965043281f9ae7db9" Mar 19 09:37:35.776393 master-0 kubenswrapper[27819]: I0319 09:37:35.776149 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm" Mar 19 09:37:35.778566 master-0 kubenswrapper[27819]: I0319 09:37:35.778515 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d97df8cb5-9hrl2" event={"ID":"25a9e5e1-e5d5-457d-8c54-8b58dca34985","Type":"ContainerStarted","Data":"2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647"} Mar 19 09:37:35.778566 master-0 kubenswrapper[27819]: I0319 09:37:35.778563 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d97df8cb5-9hrl2" event={"ID":"25a9e5e1-e5d5-457d-8c54-8b58dca34985","Type":"ContainerStarted","Data":"fad7b79a4d5a2917f8de6bde47827afb5ec2350be21021ae503c28b7bbf7724a"} Mar 19 09:37:35.781699 master-0 kubenswrapper[27819]: I0319 09:37:35.781668 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" Mar 19 09:37:35.781828 master-0 kubenswrapper[27819]: I0319 09:37:35.781649 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ff5c579f-hpdrn" event={"ID":"ad08921e-91f0-44a8-8d4e-e7cc444f823f","Type":"ContainerDied","Data":"ebe14d8e4f97b9f774d9404b2a09cc152e83aa2f6f896f1eb0c44d20c03f982b"} Mar 19 09:37:35.794726 master-0 kubenswrapper[27819]: I0319 09:37:35.794674 27819 scope.go:117] "RemoveContainer" containerID="93dd2dcb0d77ba1d218c6f4db63bfd321874a9c3e6b0bfba2b0c1dd702686dfa" Mar 19 09:37:36.000830 master-0 kubenswrapper[27819]: I0319 09:37:36.000758 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57695cbd55-nbp9m"] Mar 19 09:37:36.001126 master-0 kubenswrapper[27819]: E0319 09:37:36.001098 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="116b207a-55ef-4ac6-94c2-08574c61a8bb" containerName="route-controller-manager" Mar 19 
09:37:36.001259 master-0 kubenswrapper[27819]: I0319 09:37:36.001134 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="116b207a-55ef-4ac6-94c2-08574c61a8bb" containerName="route-controller-manager" Mar 19 09:37:36.001259 master-0 kubenswrapper[27819]: E0319 09:37:36.001169 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad08921e-91f0-44a8-8d4e-e7cc444f823f" containerName="controller-manager" Mar 19 09:37:36.001259 master-0 kubenswrapper[27819]: I0319 09:37:36.001175 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad08921e-91f0-44a8-8d4e-e7cc444f823f" containerName="controller-manager" Mar 19 09:37:36.001468 master-0 kubenswrapper[27819]: I0319 09:37:36.001443 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad08921e-91f0-44a8-8d4e-e7cc444f823f" containerName="controller-manager" Mar 19 09:37:36.001523 master-0 kubenswrapper[27819]: I0319 09:37:36.001509 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="116b207a-55ef-4ac6-94c2-08574c61a8bb" containerName="route-controller-manager" Mar 19 09:37:36.002025 master-0 kubenswrapper[27819]: I0319 09:37:36.001998 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.006078 master-0 kubenswrapper[27819]: I0319 09:37:36.006014 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qxvdx" Mar 19 09:37:36.006805 master-0 kubenswrapper[27819]: I0319 09:37:36.006762 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 19 09:37:36.007030 master-0 kubenswrapper[27819]: I0319 09:37:36.007002 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 19 09:37:36.007179 master-0 kubenswrapper[27819]: I0319 09:37:36.007137 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 19 09:37:36.009298 master-0 kubenswrapper[27819]: I0319 09:37:36.009261 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 19 09:37:36.009468 master-0 kubenswrapper[27819]: I0319 09:37:36.009444 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 19 09:37:36.014822 master-0 kubenswrapper[27819]: I0319 09:37:36.014793 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 19 09:37:36.084954 master-0 kubenswrapper[27819]: I0319 09:37:36.084804 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-config\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.085141 master-0 kubenswrapper[27819]: I0319 
09:37:36.084954 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dde615b-4324-4867-9ccb-64750f9db2cc-serving-cert\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.085141 master-0 kubenswrapper[27819]: I0319 09:37:36.085042 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-client-ca\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.085705 master-0 kubenswrapper[27819]: I0319 09:37:36.085620 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgtk\" (UniqueName: \"kubernetes.io/projected/9dde615b-4324-4867-9ccb-64750f9db2cc-kube-api-access-dlgtk\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.085834 master-0 kubenswrapper[27819]: I0319 09:37:36.085793 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-proxy-ca-bundles\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.186642 master-0 kubenswrapper[27819]: I0319 09:37:36.186586 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-config\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.186642 master-0 kubenswrapper[27819]: I0319 09:37:36.186632 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dde615b-4324-4867-9ccb-64750f9db2cc-serving-cert\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.187435 master-0 kubenswrapper[27819]: I0319 09:37:36.186762 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-client-ca\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.187435 master-0 kubenswrapper[27819]: I0319 09:37:36.186983 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgtk\" (UniqueName: \"kubernetes.io/projected/9dde615b-4324-4867-9ccb-64750f9db2cc-kube-api-access-dlgtk\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 09:37:36.187435 master-0 kubenswrapper[27819]: I0319 09:37:36.187026 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-proxy-ca-bundles\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" Mar 19 
09:37:36.187681 master-0 kubenswrapper[27819]: I0319 09:37:36.187647 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-client-ca\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:36.189638 master-0 kubenswrapper[27819]: I0319 09:37:36.188072 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-proxy-ca-bundles\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:36.189638 master-0 kubenswrapper[27819]: I0319 09:37:36.188127 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde615b-4324-4867-9ccb-64750f9db2cc-config\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:36.190150 master-0 kubenswrapper[27819]: I0319 09:37:36.190108 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dde615b-4324-4867-9ccb-64750f9db2cc-serving-cert\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:36.295864 master-0 kubenswrapper[27819]: I0319 09:37:36.295812 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgtk\" (UniqueName: \"kubernetes.io/projected/9dde615b-4324-4867-9ccb-64750f9db2cc-kube-api-access-dlgtk\") pod \"controller-manager-57695cbd55-nbp9m\" (UID: \"9dde615b-4324-4867-9ccb-64750f9db2cc\") " pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:36.303576 master-0 kubenswrapper[27819]: I0319 09:37:36.303514 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57695cbd55-nbp9m"]
Mar 19 09:37:36.325677 master-0 kubenswrapper[27819]: I0319 09:37:36.323676 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:36.400340 master-0 kubenswrapper[27819]: I0319 09:37:36.400260 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm"]
Mar 19 09:37:36.402574 master-0 kubenswrapper[27819]: I0319 09:37:36.402517 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-766d7bdc4c-jshsm"]
Mar 19 09:37:36.414719 master-0 kubenswrapper[27819]: I0319 09:37:36.413625 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75ff5c579f-hpdrn"]
Mar 19 09:37:36.445919 master-0 kubenswrapper[27819]: I0319 09:37:36.445838 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75ff5c579f-hpdrn"]
Mar 19 09:37:36.447451 master-0 kubenswrapper[27819]: I0319 09:37:36.447369 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d97df8cb5-9hrl2" podStartSLOduration=3.447345293 podStartE2EDuration="3.447345293s" podCreationTimestamp="2026-03-19 09:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:37:36.444158135 +0000 UTC m=+241.365735827" watchObservedRunningTime="2026-03-19 09:37:36.447345293 +0000 UTC m=+241.368922995"
Mar 19 09:37:36.564700 master-0 kubenswrapper[27819]: I0319 09:37:36.564644 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"]
Mar 19 09:37:36.567564 master-0 kubenswrapper[27819]: I0319 09:37:36.567307 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.572185 master-0 kubenswrapper[27819]: I0319 09:37:36.572144 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 19 09:37:36.572669 master-0 kubenswrapper[27819]: I0319 09:37:36.572639 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 19 09:37:36.572749 master-0 kubenswrapper[27819]: I0319 09:37:36.572670 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 19 09:37:36.575694 master-0 kubenswrapper[27819]: I0319 09:37:36.575660 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 19 09:37:36.575873 master-0 kubenswrapper[27819]: I0319 09:37:36.575711 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 19 09:37:36.575873 master-0 kubenswrapper[27819]: I0319 09:37:36.575842 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-2p9dz"
Mar 19 09:37:36.582528 master-0 kubenswrapper[27819]: I0319 09:37:36.580176 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"]
Mar 19 09:37:36.697141 master-0 kubenswrapper[27819]: I0319 09:37:36.697033 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0da78b52-8bbe-405c-8f9e-65f9f736f26c-client-ca\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.697141 master-0 kubenswrapper[27819]: I0319 09:37:36.697085 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vljm8\" (UniqueName: \"kubernetes.io/projected/0da78b52-8bbe-405c-8f9e-65f9f736f26c-kube-api-access-vljm8\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.697141 master-0 kubenswrapper[27819]: I0319 09:37:36.697111 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da78b52-8bbe-405c-8f9e-65f9f736f26c-serving-cert\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.697444 master-0 kubenswrapper[27819]: I0319 09:37:36.697159 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da78b52-8bbe-405c-8f9e-65f9f736f26c-config\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.779615 master-0 kubenswrapper[27819]: I0319 09:37:36.779570 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57695cbd55-nbp9m"]
Mar 19 09:37:36.780188 master-0 kubenswrapper[27819]: W0319 09:37:36.780157 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9dde615b_4324_4867_9ccb_64750f9db2cc.slice/crio-f6ea86edab4b44fe1c4ddfb6ed560f83ef2a5d7517de9165d165fc5cb58c7f0b WatchSource:0}: Error finding container f6ea86edab4b44fe1c4ddfb6ed560f83ef2a5d7517de9165d165fc5cb58c7f0b: Status 404 returned error can't find the container with id f6ea86edab4b44fe1c4ddfb6ed560f83ef2a5d7517de9165d165fc5cb58c7f0b
Mar 19 09:37:36.792683 master-0 kubenswrapper[27819]: I0319 09:37:36.792648 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" event={"ID":"9dde615b-4324-4867-9ccb-64750f9db2cc","Type":"ContainerStarted","Data":"f6ea86edab4b44fe1c4ddfb6ed560f83ef2a5d7517de9165d165fc5cb58c7f0b"}
Mar 19 09:37:36.797709 master-0 kubenswrapper[27819]: I0319 09:37:36.797670 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da78b52-8bbe-405c-8f9e-65f9f736f26c-config\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.797768 master-0 kubenswrapper[27819]: I0319 09:37:36.797739 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0da78b52-8bbe-405c-8f9e-65f9f736f26c-client-ca\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.797768 master-0 kubenswrapper[27819]: I0319 09:37:36.797765 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vljm8\" (UniqueName: \"kubernetes.io/projected/0da78b52-8bbe-405c-8f9e-65f9f736f26c-kube-api-access-vljm8\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.797927 master-0 kubenswrapper[27819]: I0319 09:37:36.797787 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da78b52-8bbe-405c-8f9e-65f9f736f26c-serving-cert\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.799338 master-0 kubenswrapper[27819]: I0319 09:37:36.799269 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0da78b52-8bbe-405c-8f9e-65f9f736f26c-client-ca\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.799634 master-0 kubenswrapper[27819]: I0319 09:37:36.799616 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0da78b52-8bbe-405c-8f9e-65f9f736f26c-config\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.801110 master-0 kubenswrapper[27819]: I0319 09:37:36.801081 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0da78b52-8bbe-405c-8f9e-65f9f736f26c-serving-cert\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.813701 master-0 kubenswrapper[27819]: I0319 09:37:36.813669 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vljm8\" (UniqueName: \"kubernetes.io/projected/0da78b52-8bbe-405c-8f9e-65f9f736f26c-kube-api-access-vljm8\") pod \"route-controller-manager-6c7644d9bf-kw5bv\" (UID: \"0da78b52-8bbe-405c-8f9e-65f9f736f26c\") " pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:36.895923 master-0 kubenswrapper[27819]: I0319 09:37:36.895888 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:37.329398 master-0 kubenswrapper[27819]: I0319 09:37:37.329129 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="116b207a-55ef-4ac6-94c2-08574c61a8bb" path="/var/lib/kubelet/pods/116b207a-55ef-4ac6-94c2-08574c61a8bb/volumes"
Mar 19 09:37:37.329998 master-0 kubenswrapper[27819]: I0319 09:37:37.329963 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad08921e-91f0-44a8-8d4e-e7cc444f823f" path="/var/lib/kubelet/pods/ad08921e-91f0-44a8-8d4e-e7cc444f823f/volumes"
Mar 19 09:37:37.359437 master-0 kubenswrapper[27819]: I0319 09:37:37.358497 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"]
Mar 19 09:37:37.805944 master-0 kubenswrapper[27819]: I0319 09:37:37.805875 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" event={"ID":"9dde615b-4324-4867-9ccb-64750f9db2cc","Type":"ContainerStarted","Data":"3a81f1e36656b838f54f550e1f4f05637c0e9c8e171958e29213d7c804927284"}
Mar 19 09:37:37.806278 master-0 kubenswrapper[27819]: I0319 09:37:37.806192 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:37.815289 master-0 kubenswrapper[27819]: I0319 09:37:37.814853 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv" event={"ID":"0da78b52-8bbe-405c-8f9e-65f9f736f26c","Type":"ContainerStarted","Data":"4ba2ae8082a10987e279419eb1617db6fcae49240b698d9f14a5e5a65ed24fbe"}
Mar 19 09:37:37.815289 master-0 kubenswrapper[27819]: I0319 09:37:37.814904 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv" event={"ID":"0da78b52-8bbe-405c-8f9e-65f9f736f26c","Type":"ContainerStarted","Data":"107a26e2f28b72ff40a246f2651236dbc5cc6378f6029e97d584d459aa34b754"}
Mar 19 09:37:37.815610 master-0 kubenswrapper[27819]: I0319 09:37:37.815596 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:37.816534 master-0 kubenswrapper[27819]: I0319 09:37:37.816507 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m"
Mar 19 09:37:37.829616 master-0 kubenswrapper[27819]: I0319 09:37:37.829556 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv"
Mar 19 09:37:37.833176 master-0 kubenswrapper[27819]: I0319 09:37:37.833123 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57695cbd55-nbp9m" podStartSLOduration=3.83310832 podStartE2EDuration="3.83310832s" podCreationTimestamp="2026-03-19 09:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:37:37.82626315 +0000 UTC m=+242.747840852" watchObservedRunningTime="2026-03-19 09:37:37.83310832 +0000 UTC m=+242.754686012"
Mar 19 09:37:37.874671 master-0 kubenswrapper[27819]: I0319 09:37:37.872513 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c7644d9bf-kw5bv" podStartSLOduration=3.872496049 podStartE2EDuration="3.872496049s" podCreationTimestamp="2026-03-19 09:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:37:37.87109822 +0000 UTC m=+242.792675912" watchObservedRunningTime="2026-03-19 09:37:37.872496049 +0000 UTC m=+242.794073741"
Mar 19 09:37:38.033799 master-0 kubenswrapper[27819]: I0319 09:37:38.033755 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5df74776c8-t52bf"
Mar 19 09:37:38.033799 master-0 kubenswrapper[27819]: I0319 09:37:38.033805 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5df74776c8-t52bf"
Mar 19 09:37:38.038136 master-0 kubenswrapper[27819]: I0319 09:37:38.038099 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5df74776c8-t52bf"
Mar 19 09:37:38.826421 master-0 kubenswrapper[27819]: I0319 09:37:38.826370 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5df74776c8-t52bf"
Mar 19 09:37:40.960316 master-0 kubenswrapper[27819]: I0319 09:37:40.960258 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-648779ffbc-s842b"]
Mar 19 09:37:44.355363 master-0 kubenswrapper[27819]: I0319 09:37:44.355298 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d97df8cb5-9hrl2"
Mar 19 09:37:44.355363 master-0 kubenswrapper[27819]: I0319 09:37:44.355351 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d97df8cb5-9hrl2"
Mar 19 09:37:44.359858 master-0 kubenswrapper[27819]: I0319 09:37:44.359798 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d97df8cb5-9hrl2"
Mar 19 09:37:44.872970 master-0 kubenswrapper[27819]: I0319 09:37:44.872915 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d97df8cb5-9hrl2"
Mar 19 09:37:45.133875 master-0 kubenswrapper[27819]: I0319 09:37:45.133824 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5df74776c8-t52bf"]
Mar 19 09:37:54.856333 master-0 kubenswrapper[27819]: I0319 09:37:54.856284 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 19 09:37:54.858739 master-0 kubenswrapper[27819]: I0319 09:37:54.858704 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.861636 master-0 kubenswrapper[27819]: I0319 09:37:54.860909 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 19 09:37:54.862515 master-0 kubenswrapper[27819]: I0319 09:37:54.862484 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 19 09:37:54.862677 master-0 kubenswrapper[27819]: I0319 09:37:54.862656 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 19 09:37:54.862854 master-0 kubenswrapper[27819]: I0319 09:37:54.862836 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 19 09:37:54.863129 master-0 kubenswrapper[27819]: I0319 09:37:54.862983 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 19 09:37:54.863410 master-0 kubenswrapper[27819]: I0319 09:37:54.863371 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 19 09:37:54.863552 master-0 kubenswrapper[27819]: I0319 09:37:54.863377 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 19 09:37:54.866820 master-0 kubenswrapper[27819]: I0319 09:37:54.866789 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 19 09:37:54.871448 master-0 kubenswrapper[27819]: I0319 09:37:54.871407 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871603 master-0 kubenswrapper[27819]: I0319 09:37:54.871456 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-web-config\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871603 master-0 kubenswrapper[27819]: I0319 09:37:54.871497 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871603 master-0 kubenswrapper[27819]: I0319 09:37:54.871521 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871603 master-0 kubenswrapper[27819]: I0319 09:37:54.871563 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871603 master-0 kubenswrapper[27819]: I0319 09:37:54.871590 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-config-volume\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871842 master-0 kubenswrapper[27819]: I0319 09:37:54.871648 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871842 master-0 kubenswrapper[27819]: I0319 09:37:54.871716 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-config-out\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871842 master-0 kubenswrapper[27819]: I0319 09:37:54.871740 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871842 master-0 kubenswrapper[27819]: I0319 09:37:54.871763 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871842 master-0 kubenswrapper[27819]: I0319 09:37:54.871797 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzgcj\" (UniqueName: \"kubernetes.io/projected/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-kube-api-access-mzgcj\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.871842 master-0 kubenswrapper[27819]: I0319 09:37:54.871830 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.881633 master-0 kubenswrapper[27819]: I0319 09:37:54.880394 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 19 09:37:54.972832 master-0 kubenswrapper[27819]: I0319 09:37:54.972762 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.972832 master-0 kubenswrapper[27819]: I0319 09:37:54.972836 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-config-out\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.973977 master-0 kubenswrapper[27819]: I0319 09:37:54.973374 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.973977 master-0 kubenswrapper[27819]: I0319 09:37:54.973401 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.973977 master-0 kubenswrapper[27819]: I0319 09:37:54.973656 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzgcj\" (UniqueName: \"kubernetes.io/projected/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-kube-api-access-mzgcj\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.973977 master-0 kubenswrapper[27819]: I0319 09:37:54.973751 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.973977 master-0 kubenswrapper[27819]: I0319 09:37:54.973863 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.973977 master-0 kubenswrapper[27819]: I0319 09:37:54.973898 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-web-config\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.974318 master-0 kubenswrapper[27819]: I0319 09:37:54.974218 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.974318 master-0 kubenswrapper[27819]: I0319 09:37:54.974270 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.974318 master-0 kubenswrapper[27819]: I0319 09:37:54.974303 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.974318 master-0 kubenswrapper[27819]: I0319 09:37:54.974327 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-config-volume\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.975407 master-0 kubenswrapper[27819]: I0319 09:37:54.975185 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.975407 master-0 kubenswrapper[27819]: I0319 09:37:54.975346 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.975656 master-0 kubenswrapper[27819]: I0319 09:37:54.975430 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.978245 master-0 kubenswrapper[27819]: I0319 09:37:54.978180 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.984237 master-0 kubenswrapper[27819]: I0319 09:37:54.983813 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.984237 master-0 kubenswrapper[27819]: I0319 09:37:54.984181 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.984413 master-0 kubenswrapper[27819]: I0319 09:37:54.984223 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-config-out\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.984595 master-0 kubenswrapper[27819]: I0319 09:37:54.984429 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.984661 master-0 kubenswrapper[27819]: I0319 09:37:54.984534 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-config-volume\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.986028 master-0 kubenswrapper[27819]: I0319 09:37:54.985942 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.987751 master-0 kubenswrapper[27819]: I0319 09:37:54.987377 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-web-config\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:54.990982 master-0 kubenswrapper[27819]: I0319 09:37:54.990655 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzgcj\" (UniqueName: \"kubernetes.io/projected/4674b10a-f1c1-4bc6-a366-6ecbaff1977e-kube-api-access-mzgcj\") pod \"alertmanager-main-0\" (UID: \"4674b10a-f1c1-4bc6-a366-6ecbaff1977e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:55.182782 master-0 kubenswrapper[27819]: I0319 09:37:55.182657 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 19 09:37:55.813691 master-0 kubenswrapper[27819]: I0319 09:37:55.813644 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-6f854479d9-fwtt4"]
Mar 19 09:37:55.815884 master-0 kubenswrapper[27819]: I0319 09:37:55.815866 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:37:55.834751 master-0 kubenswrapper[27819]: I0319 09:37:55.834696 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f854479d9-fwtt4"]
Mar 19 09:37:55.835059 master-0 kubenswrapper[27819]: I0319 09:37:55.834927 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 19 09:37:55.835127 master-0 kubenswrapper[27819]: I0319 09:37:55.835013 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 19 09:37:55.835127 master-0 kubenswrapper[27819]: I0319 09:37:55.835072 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 19 09:37:55.835232 master-0 kubenswrapper[27819]: I0319 09:37:55.835188 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-dp0gq2u0plsre"
Mar 19 09:37:55.835294 master-0 kubenswrapper[27819]: I0319 09:37:55.834927 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 19 09:37:55.835639 master-0 kubenswrapper[27819]: I0319 09:37:55.835483 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 19 09:37:55.889334 master-0 kubenswrapper[27819]: I0319 09:37:55.889104 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:37:55.889963 master-0 kubenswrapper[27819]: I0319 09:37:55.889351 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:37:55.890637 master-0 kubenswrapper[27819]: I0319 09:37:55.890305 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-metrics-client-ca\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:37:55.890637 master-0 kubenswrapper[27819]: I0319 09:37:55.890577 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:37:55.891017 master-0 kubenswrapper[27819]: I0319 09:37:55.890753 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-tls\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:37:55.891148 master-0 kubenswrapper[27819]: I0319 09:37:55.891111 27819 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-grpc-tls\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:55.891216 master-0 kubenswrapper[27819]: I0319 09:37:55.891199 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:55.896265 master-0 kubenswrapper[27819]: I0319 09:37:55.893047 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq847\" (UniqueName: \"kubernetes.io/projected/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-kube-api-access-qq847\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:55.999988 master-0 kubenswrapper[27819]: I0319 09:37:55.999896 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq847\" (UniqueName: \"kubernetes.io/projected/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-kube-api-access-qq847\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.000379 master-0 kubenswrapper[27819]: I0319 09:37:56.000018 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.000379 master-0 kubenswrapper[27819]: I0319 09:37:56.000080 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.000379 master-0 kubenswrapper[27819]: I0319 09:37:56.000336 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-metrics-client-ca\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.000575 master-0 kubenswrapper[27819]: I0319 09:37:56.000393 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.012221 master-0 kubenswrapper[27819]: I0319 09:37:56.002298 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-tls\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: 
\"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.012221 master-0 kubenswrapper[27819]: I0319 09:37:56.002372 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-grpc-tls\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.012221 master-0 kubenswrapper[27819]: I0319 09:37:56.002400 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.012221 master-0 kubenswrapper[27819]: I0319 09:37:56.003476 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-metrics-client-ca\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.012738 master-0 kubenswrapper[27819]: I0319 09:37:56.012590 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.016775 master-0 kubenswrapper[27819]: I0319 09:37:56.016726 27819 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-tls\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.017625 master-0 kubenswrapper[27819]: I0319 09:37:56.017488 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-grpc-tls\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.020152 master-0 kubenswrapper[27819]: I0319 09:37:56.020113 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.022493 master-0 kubenswrapper[27819]: I0319 09:37:56.022445 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.024472 master-0 kubenswrapper[27819]: I0319 09:37:56.024418 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-secret-thanos-querier-kube-rbac-proxy-web\") pod 
\"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.028181 master-0 kubenswrapper[27819]: I0319 09:37:56.028137 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq847\" (UniqueName: \"kubernetes.io/projected/5fcb84d4-a7ab-4240-9638-b0e96d9df84f-kube-api-access-qq847\") pod \"thanos-querier-6f854479d9-fwtt4\" (UID: \"5fcb84d4-a7ab-4240-9638-b0e96d9df84f\") " pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:37:56.151894 master-0 kubenswrapper[27819]: I0319 09:37:56.151827 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" Mar 19 09:38:01.242131 master-0 kubenswrapper[27819]: I0319 09:38:01.242061 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-6f854479d9-fwtt4"] Mar 19 09:38:01.265598 master-0 kubenswrapper[27819]: I0319 09:38:01.265506 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 19 09:38:01.323152 master-0 kubenswrapper[27819]: I0319 09:38:01.321700 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7679cd5c8b-nf76b"] Mar 19 09:38:01.323152 master-0 kubenswrapper[27819]: I0319 09:38:01.322696 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.325315 master-0 kubenswrapper[27819]: I0319 09:38:01.325278 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5hthrq2lhnhte" Mar 19 09:38:01.337030 master-0 kubenswrapper[27819]: I0319 09:38:01.336937 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7c64897fc5-qj6vj"] Mar 19 09:38:01.337260 master-0 kubenswrapper[27819]: I0319 09:38:01.337201 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" podUID="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" containerName="metrics-server" containerID="cri-o://b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4" gracePeriod=170 Mar 19 09:38:01.366599 master-0 kubenswrapper[27819]: I0319 09:38:01.363820 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7679cd5c8b-nf76b"] Mar 19 09:38:01.385969 master-0 kubenswrapper[27819]: I0319 09:38:01.385927 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmc6k\" (UniqueName: \"kubernetes.io/projected/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-kube-api-access-vmc6k\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.386160 master-0 kubenswrapper[27819]: I0319 09:38:01.386142 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-client-ca-bundle\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.386278 master-0 
kubenswrapper[27819]: I0319 09:38:01.386260 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-secret-metrics-client-certs\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.388806 master-0 kubenswrapper[27819]: I0319 09:38:01.388781 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-metrics-server-audit-profiles\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.388956 master-0 kubenswrapper[27819]: I0319 09:38:01.388938 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.389114 master-0 kubenswrapper[27819]: I0319 09:38:01.389096 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-audit-log\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.389234 master-0 kubenswrapper[27819]: I0319 09:38:01.389218 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-secret-metrics-server-tls\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.490603 master-0 kubenswrapper[27819]: I0319 09:38:01.490516 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-secret-metrics-client-certs\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.490831 master-0 kubenswrapper[27819]: I0319 09:38:01.490741 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-metrics-server-audit-profiles\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.490831 master-0 kubenswrapper[27819]: I0319 09:38:01.490807 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.490930 master-0 kubenswrapper[27819]: I0319 09:38:01.490886 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-audit-log\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " 
pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.491201 master-0 kubenswrapper[27819]: I0319 09:38:01.491146 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-secret-metrics-server-tls\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.491423 master-0 kubenswrapper[27819]: I0319 09:38:01.491402 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmc6k\" (UniqueName: \"kubernetes.io/projected/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-kube-api-access-vmc6k\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.491693 master-0 kubenswrapper[27819]: I0319 09:38:01.491600 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-audit-log\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.491819 master-0 kubenswrapper[27819]: I0319 09:38:01.491799 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-client-ca-bundle\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.492360 master-0 kubenswrapper[27819]: I0319 09:38:01.492285 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.492576 master-0 kubenswrapper[27819]: I0319 09:38:01.492531 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-metrics-server-audit-profiles\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.494244 master-0 kubenswrapper[27819]: I0319 09:38:01.494185 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-secret-metrics-client-certs\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.494556 master-0 kubenswrapper[27819]: I0319 09:38:01.494510 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-secret-metrics-server-tls\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.499021 master-0 kubenswrapper[27819]: I0319 09:38:01.498976 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-client-ca-bundle\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.518444 
master-0 kubenswrapper[27819]: I0319 09:38:01.518370 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmc6k\" (UniqueName: \"kubernetes.io/projected/5fc4ed5e-6704-49fb-9e35-ae092559c5cb-kube-api-access-vmc6k\") pod \"metrics-server-7679cd5c8b-nf76b\" (UID: \"5fc4ed5e-6704-49fb-9e35-ae092559c5cb\") " pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:01.716393 master-0 kubenswrapper[27819]: I0319 09:38:01.716346 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:02.012636 master-0 kubenswrapper[27819]: I0319 09:38:02.011606 27819 generic.go:334] "Generic (PLEG): container finished" podID="4674b10a-f1c1-4bc6-a366-6ecbaff1977e" containerID="5d39c9d64c56bd8d4db4a44aa2e1bd2e5ddba05729a357479801e32e6d02a489" exitCode=0 Mar 19 09:38:02.012636 master-0 kubenswrapper[27819]: I0319 09:38:02.011684 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerDied","Data":"5d39c9d64c56bd8d4db4a44aa2e1bd2e5ddba05729a357479801e32e6d02a489"} Mar 19 09:38:02.012636 master-0 kubenswrapper[27819]: I0319 09:38:02.011717 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerStarted","Data":"32c4eacbf473428a739a04ac36ca355dc14494f57698603255e192def9fb1235"} Mar 19 09:38:02.028402 master-0 kubenswrapper[27819]: I0319 09:38:02.028278 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-9mnzh" event={"ID":"786e8363-5cf8-45b3-a02c-70db5d6252f2","Type":"ContainerStarted","Data":"ad4d1538b1b5db046c1e5e9d3fb9d4e038ae95855ccd3d2ed1c6bac167d09653"} Mar 19 09:38:02.029427 master-0 kubenswrapper[27819]: I0319 09:38:02.029367 27819 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-9mnzh" Mar 19 09:38:02.031948 master-0 kubenswrapper[27819]: I0319 09:38:02.031867 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" event={"ID":"5fcb84d4-a7ab-4240-9638-b0e96d9df84f","Type":"ContainerStarted","Data":"6434f64ebdc20e49bfd66b246705fff904fde54bb1018f90eae22c5a0552cc95"} Mar 19 09:38:02.032296 master-0 kubenswrapper[27819]: I0319 09:38:02.032243 27819 patch_prober.go:28] interesting pod/downloads-66b8ffb895-9mnzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 19 09:38:02.032296 master-0 kubenswrapper[27819]: I0319 09:38:02.032284 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-9mnzh" podUID="786e8363-5cf8-45b3-a02c-70db5d6252f2" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 19 09:38:02.085827 master-0 kubenswrapper[27819]: I0319 09:38:02.085642 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-9mnzh" podStartSLOduration=1.5715693229999999 podStartE2EDuration="38.08560918s" podCreationTimestamp="2026-03-19 09:37:24 +0000 UTC" firstStartedPulling="2026-03-19 09:37:24.98574693 +0000 UTC m=+229.907324642" lastFinishedPulling="2026-03-19 09:38:01.499786807 +0000 UTC m=+266.421364499" observedRunningTime="2026-03-19 09:38:02.076704503 +0000 UTC m=+266.998282205" watchObservedRunningTime="2026-03-19 09:38:02.08560918 +0000 UTC m=+267.007186872" Mar 19 09:38:02.147840 master-0 kubenswrapper[27819]: I0319 09:38:02.147706 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7679cd5c8b-nf76b"] Mar 19 
09:38:03.047869 master-0 kubenswrapper[27819]: I0319 09:38:03.047812 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" event={"ID":"5fc4ed5e-6704-49fb-9e35-ae092559c5cb","Type":"ContainerStarted","Data":"1689096a1ca481bb67a104fe91286c7b1258b239151a98013cd6e43a00b454e0"} Mar 19 09:38:03.048443 master-0 kubenswrapper[27819]: I0319 09:38:03.048425 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" event={"ID":"5fc4ed5e-6704-49fb-9e35-ae092559c5cb","Type":"ContainerStarted","Data":"c0b34c05e73a5dbb2b0de7ff7e5afca2c8e4cb7e57399aec539b72936932eaf1"} Mar 19 09:38:03.049358 master-0 kubenswrapper[27819]: I0319 09:38:03.048776 27819 patch_prober.go:28] interesting pod/downloads-66b8ffb895-9mnzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 19 09:38:03.049439 master-0 kubenswrapper[27819]: I0319 09:38:03.049391 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-9mnzh" podUID="786e8363-5cf8-45b3-a02c-70db5d6252f2" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" Mar 19 09:38:04.057948 master-0 kubenswrapper[27819]: I0319 09:38:04.057875 27819 patch_prober.go:28] interesting pod/downloads-66b8ffb895-9mnzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body= Mar 19 09:38:04.058602 master-0 kubenswrapper[27819]: I0319 09:38:04.057963 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-9mnzh" podUID="786e8363-5cf8-45b3-a02c-70db5d6252f2" 
containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused"
Mar 19 09:38:04.352796 master-0 kubenswrapper[27819]: I0319 09:38:04.352697 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" podStartSLOduration=3.352678561 podStartE2EDuration="3.352678561s" podCreationTimestamp="2026-03-19 09:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:38:04.346528171 +0000 UTC m=+269.268105883" watchObservedRunningTime="2026-03-19 09:38:04.352678561 +0000 UTC m=+269.274256253"
Mar 19 09:38:04.567570 master-0 kubenswrapper[27819]: I0319 09:38:04.567499 27819 patch_prober.go:28] interesting pod/downloads-66b8ffb895-9mnzh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body=
Mar 19 09:38:04.567886 master-0 kubenswrapper[27819]: I0319 09:38:04.567723 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-9mnzh" podUID="786e8363-5cf8-45b3-a02c-70db5d6252f2" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused"
Mar 19 09:38:04.568175 master-0 kubenswrapper[27819]: I0319 09:38:04.568116 27819 patch_prober.go:28] interesting pod/downloads-66b8ffb895-9mnzh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused" start-of-body=
Mar 19 09:38:04.568242 master-0 kubenswrapper[27819]: I0319 09:38:04.568189 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-66b8ffb895-9mnzh" podUID="786e8363-5cf8-45b3-a02c-70db5d6252f2" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.90:8080/\": dial tcp 10.128.0.90:8080: connect: connection refused"
Mar 19 09:38:05.997770 master-0 kubenswrapper[27819]: I0319 09:38:05.997724 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" podUID="e83a3bcd-f590-4e36-846e-255494625539" containerName="oauth-openshift" containerID="cri-o://eb91127cd4ca2ec42d2aac565b3f52f7b96ff03f93c2022cac6c15068b6a06a3" gracePeriod=15
Mar 19 09:38:06.945372 master-0 kubenswrapper[27819]: I0319 09:38:06.945298 27819 patch_prober.go:28] interesting pod/oauth-openshift-648779ffbc-s842b container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.88:6443/healthz\": dial tcp 10.128.0.88:6443: connect: connection refused" start-of-body=
Mar 19 09:38:06.945669 master-0 kubenswrapper[27819]: I0319 09:38:06.945382 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" podUID="e83a3bcd-f590-4e36-846e-255494625539" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.88:6443/healthz\": dial tcp 10.128.0.88:6443: connect: connection refused"
Mar 19 09:38:08.091627 master-0 kubenswrapper[27819]: I0319 09:38:08.089790 27819 generic.go:334] "Generic (PLEG): container finished" podID="e83a3bcd-f590-4e36-846e-255494625539" containerID="eb91127cd4ca2ec42d2aac565b3f52f7b96ff03f93c2022cac6c15068b6a06a3" exitCode=0
Mar 19 09:38:08.091627 master-0 kubenswrapper[27819]: I0319 09:38:08.089855 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" event={"ID":"e83a3bcd-f590-4e36-846e-255494625539","Type":"ContainerDied","Data":"eb91127cd4ca2ec42d2aac565b3f52f7b96ff03f93c2022cac6c15068b6a06a3"}
Mar 19 09:38:10.183370 master-0 kubenswrapper[27819]: I0319 09:38:10.183304 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5df74776c8-t52bf" podUID="9450ec1b-239d-45ce-9747-a1b372326025" containerName="console" containerID="cri-o://3af19e0a5e7a5d9be4dd5ee061e967c8fc4f24aca73841b63fcdabfeb0b00164" gracePeriod=15
Mar 19 09:38:10.974809 master-0 kubenswrapper[27819]: I0319 09:38:10.974763 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b"
Mar 19 09:38:11.106707 master-0 kubenswrapper[27819]: I0319 09:38:11.106657 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5df74776c8-t52bf_9450ec1b-239d-45ce-9747-a1b372326025/console/0.log"
Mar 19 09:38:11.106950 master-0 kubenswrapper[27819]: I0319 09:38:11.106714 27819 generic.go:334] "Generic (PLEG): container finished" podID="9450ec1b-239d-45ce-9747-a1b372326025" containerID="3af19e0a5e7a5d9be4dd5ee061e967c8fc4f24aca73841b63fcdabfeb0b00164" exitCode=2
Mar 19 09:38:11.106950 master-0 kubenswrapper[27819]: I0319 09:38:11.106793 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df74776c8-t52bf" event={"ID":"9450ec1b-239d-45ce-9747-a1b372326025","Type":"ContainerDied","Data":"3af19e0a5e7a5d9be4dd5ee061e967c8fc4f24aca73841b63fcdabfeb0b00164"}
Mar 19 09:38:11.108553 master-0 kubenswrapper[27819]: I0319 09:38:11.108498 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b" event={"ID":"e83a3bcd-f590-4e36-846e-255494625539","Type":"ContainerDied","Data":"bc99706567a8c71095d839793997e74a49782b66a5d679919190004babb90b86"}
Mar 19 09:38:11.108618 master-0 kubenswrapper[27819]: I0319 09:38:11.108553 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-648779ffbc-s842b"
Mar 19 09:38:11.108618 master-0 kubenswrapper[27819]: I0319 09:38:11.108586 27819 scope.go:117] "RemoveContainer" containerID="eb91127cd4ca2ec42d2aac565b3f52f7b96ff03f93c2022cac6c15068b6a06a3"
Mar 19 09:38:11.149925 master-0 kubenswrapper[27819]: I0319 09:38:11.149869 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-service-ca\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.150326 master-0 kubenswrapper[27819]: I0319 09:38:11.150286 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e83a3bcd-f590-4e36-846e-255494625539-audit-dir\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.150378 master-0 kubenswrapper[27819]: I0319 09:38:11.150359 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-serving-cert\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.150463 master-0 kubenswrapper[27819]: I0319 09:38:11.150442 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-audit-policies\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.150508 master-0 kubenswrapper[27819]: I0319 09:38:11.150498 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng42l\" (UniqueName: \"kubernetes.io/projected/e83a3bcd-f590-4e36-846e-255494625539-kube-api-access-ng42l\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.150873 master-0 kubenswrapper[27819]: I0319 09:38:11.150603 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e83a3bcd-f590-4e36-846e-255494625539-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:38:11.151414 master-0 kubenswrapper[27819]: I0319 09:38:11.151158 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:38:11.151414 master-0 kubenswrapper[27819]: I0319 09:38:11.151301 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-trusted-ca-bundle\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.151414 master-0 kubenswrapper[27819]: I0319 09:38:11.151339 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-error\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.151414 master-0 kubenswrapper[27819]: I0319 09:38:11.151374 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:38:11.151614 master-0 kubenswrapper[27819]: I0319 09:38:11.151471 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:38:11.151614 master-0 kubenswrapper[27819]: I0319 09:38:11.151580 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-login\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.151683 master-0 kubenswrapper[27819]: I0319 09:38:11.151641 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-ocp-branding-template\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.151855 master-0 kubenswrapper[27819]: I0319 09:38:11.151684 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-router-certs\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.151910 master-0 kubenswrapper[27819]: I0319 09:38:11.151882 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-provider-selection\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.151955 master-0 kubenswrapper[27819]: I0319 09:38:11.151929 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-cliconfig\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.151999 master-0 kubenswrapper[27819]: I0319 09:38:11.151962 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-session\") pod \"e83a3bcd-f590-4e36-846e-255494625539\" (UID: \"e83a3bcd-f590-4e36-846e-255494625539\") "
Mar 19 09:38:11.153256 master-0 kubenswrapper[27819]: I0319 09:38:11.153208 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:38:11.153668 master-0 kubenswrapper[27819]: I0319 09:38:11.153636 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83a3bcd-f590-4e36-846e-255494625539-kube-api-access-ng42l" (OuterVolumeSpecName: "kube-api-access-ng42l") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "kube-api-access-ng42l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:38:11.153846 master-0 kubenswrapper[27819]: I0319 09:38:11.153803 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:11.154583 master-0 kubenswrapper[27819]: I0319 09:38:11.154537 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng42l\" (UniqueName: \"kubernetes.io/projected/e83a3bcd-f590-4e36-846e-255494625539-kube-api-access-ng42l\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.154638 master-0 kubenswrapper[27819]: I0319 09:38:11.154587 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.154638 master-0 kubenswrapper[27819]: I0319 09:38:11.154611 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.154638 master-0 kubenswrapper[27819]: I0319 09:38:11.154631 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.154760 master-0 kubenswrapper[27819]: I0319 09:38:11.154648 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.154760 master-0 kubenswrapper[27819]: I0319 09:38:11.154662 27819 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e83a3bcd-f590-4e36-846e-255494625539-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.154760 master-0 kubenswrapper[27819]: I0319 09:38:11.154681 27819 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e83a3bcd-f590-4e36-846e-255494625539-audit-policies\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.154909 master-0 kubenswrapper[27819]: I0319 09:38:11.154831 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:11.157562 master-0 kubenswrapper[27819]: I0319 09:38:11.155331 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:11.161459 master-0 kubenswrapper[27819]: I0319 09:38:11.161410 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:11.161565 master-0 kubenswrapper[27819]: I0319 09:38:11.161453 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:11.161644 master-0 kubenswrapper[27819]: I0319 09:38:11.161530 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:11.162608 master-0 kubenswrapper[27819]: I0319 09:38:11.162572 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e83a3bcd-f590-4e36-846e-255494625539" (UID: "e83a3bcd-f590-4e36-846e-255494625539"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:11.255283 master-0 kubenswrapper[27819]: I0319 09:38:11.255133 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.255283 master-0 kubenswrapper[27819]: I0319 09:38:11.255182 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.255283 master-0 kubenswrapper[27819]: I0319 09:38:11.255202 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.255283 master-0 kubenswrapper[27819]: I0319 09:38:11.255215 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.255283 master-0 kubenswrapper[27819]: I0319 09:38:11.255229 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.255283 master-0 kubenswrapper[27819]: I0319 09:38:11.255240 27819 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e83a3bcd-f590-4e36-846e-255494625539-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:11.883165 master-0 kubenswrapper[27819]: I0319 09:38:11.883123 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5df74776c8-t52bf_9450ec1b-239d-45ce-9747-a1b372326025/console/0.log"
Mar 19 09:38:11.883316 master-0 kubenswrapper[27819]: I0319 09:38:11.883209 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5df74776c8-t52bf"
Mar 19 09:38:12.066829 master-0 kubenswrapper[27819]: I0319 09:38:12.066652 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z227m\" (UniqueName: \"kubernetes.io/projected/9450ec1b-239d-45ce-9747-a1b372326025-kube-api-access-z227m\") pod \"9450ec1b-239d-45ce-9747-a1b372326025\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") "
Mar 19 09:38:12.066829 master-0 kubenswrapper[27819]: I0319 09:38:12.066741 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-service-ca\") pod \"9450ec1b-239d-45ce-9747-a1b372326025\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") "
Mar 19 09:38:12.066829 master-0 kubenswrapper[27819]: I0319 09:38:12.066764 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-serving-cert\") pod \"9450ec1b-239d-45ce-9747-a1b372326025\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") "
Mar 19 09:38:12.066829 master-0 kubenswrapper[27819]: I0319 09:38:12.066785 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-oauth-config\") pod \"9450ec1b-239d-45ce-9747-a1b372326025\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") "
Mar 19 09:38:12.066829 master-0 kubenswrapper[27819]: I0319 09:38:12.066857 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-oauth-serving-cert\") pod \"9450ec1b-239d-45ce-9747-a1b372326025\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") "
Mar 19 09:38:12.067232 master-0 kubenswrapper[27819]: I0319 09:38:12.066930 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-console-config\") pod \"9450ec1b-239d-45ce-9747-a1b372326025\" (UID: \"9450ec1b-239d-45ce-9747-a1b372326025\") "
Mar 19 09:38:12.067513 master-0 kubenswrapper[27819]: I0319 09:38:12.067480 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-console-config" (OuterVolumeSpecName: "console-config") pod "9450ec1b-239d-45ce-9747-a1b372326025" (UID: "9450ec1b-239d-45ce-9747-a1b372326025"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:38:12.067626 master-0 kubenswrapper[27819]: I0319 09:38:12.067570 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9450ec1b-239d-45ce-9747-a1b372326025" (UID: "9450ec1b-239d-45ce-9747-a1b372326025"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:38:12.067671 master-0 kubenswrapper[27819]: I0319 09:38:12.067583 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-service-ca" (OuterVolumeSpecName: "service-ca") pod "9450ec1b-239d-45ce-9747-a1b372326025" (UID: "9450ec1b-239d-45ce-9747-a1b372326025"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:38:12.069894 master-0 kubenswrapper[27819]: I0319 09:38:12.069846 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9450ec1b-239d-45ce-9747-a1b372326025-kube-api-access-z227m" (OuterVolumeSpecName: "kube-api-access-z227m") pod "9450ec1b-239d-45ce-9747-a1b372326025" (UID: "9450ec1b-239d-45ce-9747-a1b372326025"). InnerVolumeSpecName "kube-api-access-z227m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:38:12.070175 master-0 kubenswrapper[27819]: I0319 09:38:12.070142 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9450ec1b-239d-45ce-9747-a1b372326025" (UID: "9450ec1b-239d-45ce-9747-a1b372326025"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:12.070244 master-0 kubenswrapper[27819]: I0319 09:38:12.070198 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9450ec1b-239d-45ce-9747-a1b372326025" (UID: "9450ec1b-239d-45ce-9747-a1b372326025"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:38:12.117033 master-0 kubenswrapper[27819]: I0319 09:38:12.116879 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5df74776c8-t52bf_9450ec1b-239d-45ce-9747-a1b372326025/console/0.log"
Mar 19 09:38:12.117263 master-0 kubenswrapper[27819]: I0319 09:38:12.117031 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5df74776c8-t52bf" event={"ID":"9450ec1b-239d-45ce-9747-a1b372326025","Type":"ContainerDied","Data":"71a38fe666175fc054146703e8381aaba96a29b41d815b2f3b825bff34cdc0bd"}
Mar 19 09:38:12.117263 master-0 kubenswrapper[27819]: I0319 09:38:12.117040 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5df74776c8-t52bf"
Mar 19 09:38:12.117263 master-0 kubenswrapper[27819]: I0319 09:38:12.117078 27819 scope.go:117] "RemoveContainer" containerID="3af19e0a5e7a5d9be4dd5ee061e967c8fc4f24aca73841b63fcdabfeb0b00164"
Mar 19 09:38:12.168508 master-0 kubenswrapper[27819]: I0319 09:38:12.168427 27819 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:12.168508 master-0 kubenswrapper[27819]: I0319 09:38:12.168471 27819 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-console-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:12.168508 master-0 kubenswrapper[27819]: I0319 09:38:12.168482 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z227m\" (UniqueName: \"kubernetes.io/projected/9450ec1b-239d-45ce-9747-a1b372326025-kube-api-access-z227m\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:12.168508 master-0 kubenswrapper[27819]: I0319 09:38:12.168504 27819 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9450ec1b-239d-45ce-9747-a1b372326025-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:12.168508 master-0 kubenswrapper[27819]: I0319 09:38:12.168514 27819 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:12.168508 master-0 kubenswrapper[27819]: I0319 09:38:12.168523 27819 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9450ec1b-239d-45ce-9747-a1b372326025-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:38:14.500316 master-0 kubenswrapper[27819]: I0319 09:38:14.497511 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"]
Mar 19 09:38:14.500316 master-0 kubenswrapper[27819]: E0319 09:38:14.497893 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9450ec1b-239d-45ce-9747-a1b372326025" containerName="console"
Mar 19 09:38:14.500316 master-0 kubenswrapper[27819]: I0319 09:38:14.497913 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="9450ec1b-239d-45ce-9747-a1b372326025" containerName="console"
Mar 19 09:38:14.500316 master-0 kubenswrapper[27819]: E0319 09:38:14.497940 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83a3bcd-f590-4e36-846e-255494625539" containerName="oauth-openshift"
Mar 19 09:38:14.500316 master-0 kubenswrapper[27819]: I0319 09:38:14.497951 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83a3bcd-f590-4e36-846e-255494625539" containerName="oauth-openshift"
Mar 19 09:38:14.500316 master-0 kubenswrapper[27819]: I0319 09:38:14.498124 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="9450ec1b-239d-45ce-9747-a1b372326025" containerName="console"
Mar 19 09:38:14.500316 master-0 kubenswrapper[27819]: I0319 09:38:14.498149 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e83a3bcd-f590-4e36-846e-255494625539" containerName="oauth-openshift"
Mar 19 09:38:14.508029 master-0 kubenswrapper[27819]: I0319 09:38:14.507975 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.521484 master-0 kubenswrapper[27819]: I0319 09:38:14.521440 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 19 09:38:14.521768 master-0 kubenswrapper[27819]: I0319 09:38:14.521715 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 19 09:38:14.521889 master-0 kubenswrapper[27819]: I0319 09:38:14.521872 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 19 09:38:14.521945 master-0 kubenswrapper[27819]: I0319 09:38:14.521938 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 19 09:38:14.522021 master-0 kubenswrapper[27819]: I0319 09:38:14.522006 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 19 09:38:14.522112 master-0 kubenswrapper[27819]: I0319 09:38:14.522097 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 19 09:38:14.522259 master-0 kubenswrapper[27819]: I0319 09:38:14.522242 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 19 09:38:14.522383 master-0 kubenswrapper[27819]: I0319 09:38:14.522368 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 19 09:38:14.523261 master-0 kubenswrapper[27819]: I0319 09:38:14.523215 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 19 09:38:14.523317 master-0 kubenswrapper[27819]: I0319 09:38:14.523302 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-fmdhh"
Mar 19 09:38:14.523477 master-0 kubenswrapper[27819]: I0319 09:38:14.523450 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 19 09:38:14.523693 master-0 kubenswrapper[27819]: I0319 09:38:14.523569 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 19 09:38:14.531629 master-0 kubenswrapper[27819]: I0319 09:38:14.531535 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 19 09:38:14.538069 master-0 kubenswrapper[27819]: I0319 09:38:14.538025 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 19 09:38:14.572157 master-0 kubenswrapper[27819]: I0319 09:38:14.572083 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-9mnzh"
Mar 19 09:38:14.602462 master-0 kubenswrapper[27819]: I0319 09:38:14.602399 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-login\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.602462 master-0 kubenswrapper[27819]: I0319 09:38:14.602465 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-audit-policies\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.602965 master-0 kubenswrapper[27819]: I0319 09:38:14.602655 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.602965 master-0 kubenswrapper[27819]: I0319 09:38:14.602705 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9qzp\" (UniqueName: \"kubernetes.io/projected/21525212-2b1a-40a4-bbd2-d4b000c1c363-kube-api-access-d9qzp\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.602965 master-0 kubenswrapper[27819]: I0319 09:38:14.602733 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-service-ca\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.602965 master-0 kubenswrapper[27819]: I0319 09:38:14.602772 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-router-certs\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.602965 master-0 kubenswrapper[27819]: I0319 09:38:14.602810 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-serving-cert\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.602965 master-0 kubenswrapper[27819]: I0319 09:38:14.602936 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.603329 master-0 kubenswrapper[27819]: I0319 09:38:14.602984 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21525212-2b1a-40a4-bbd2-d4b000c1c363-audit-dir\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"
Mar 19 09:38:14.603329 master-0 kubenswrapper[27819]: I0319 09:38:14.603022 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-error\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.603329 master-0 kubenswrapper[27819]: I0319 09:38:14.603116 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-cliconfig\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.603329 master-0 kubenswrapper[27819]: I0319 09:38:14.603152 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.603329 master-0 kubenswrapper[27819]: I0319 09:38:14.603196 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-session\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.704802 master-0 kubenswrapper[27819]: I0319 09:38:14.704704 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-audit-policies\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705055 master-0 kubenswrapper[27819]: I0319 09:38:14.704848 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-login\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705055 master-0 kubenswrapper[27819]: I0319 09:38:14.704942 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705055 master-0 kubenswrapper[27819]: I0319 09:38:14.704976 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-service-ca\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705055 master-0 kubenswrapper[27819]: I0319 09:38:14.705007 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9qzp\" (UniqueName: \"kubernetes.io/projected/21525212-2b1a-40a4-bbd2-d4b000c1c363-kube-api-access-d9qzp\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " 
pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705304 master-0 kubenswrapper[27819]: I0319 09:38:14.705230 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-router-certs\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705361 master-0 kubenswrapper[27819]: I0319 09:38:14.705341 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-serving-cert\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705421 master-0 kubenswrapper[27819]: I0319 09:38:14.705395 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705466 master-0 kubenswrapper[27819]: I0319 09:38:14.705438 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21525212-2b1a-40a4-bbd2-d4b000c1c363-audit-dir\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705509 master-0 kubenswrapper[27819]: I0319 09:38:14.705472 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-error\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705572 master-0 kubenswrapper[27819]: I0319 09:38:14.705528 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-cliconfig\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705622 master-0 kubenswrapper[27819]: I0319 09:38:14.705562 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/21525212-2b1a-40a4-bbd2-d4b000c1c363-audit-dir\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.705622 master-0 kubenswrapper[27819]: I0319 09:38:14.705578 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.706802 master-0 kubenswrapper[27819]: I0319 09:38:14.705650 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-session\") pod 
\"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.706802 master-0 kubenswrapper[27819]: I0319 09:38:14.706010 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-audit-policies\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.706802 master-0 kubenswrapper[27819]: I0319 09:38:14.706103 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-service-ca\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.706802 master-0 kubenswrapper[27819]: I0319 09:38:14.706290 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-cliconfig\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.706802 master-0 kubenswrapper[27819]: I0319 09:38:14.706668 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.708964 master-0 kubenswrapper[27819]: I0319 
09:38:14.708934 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.709806 master-0 kubenswrapper[27819]: I0319 09:38:14.709775 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-error\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.709998 master-0 kubenswrapper[27819]: I0319 09:38:14.709955 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-router-certs\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.710053 master-0 kubenswrapper[27819]: I0319 09:38:14.709973 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-serving-cert\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.710294 master-0 kubenswrapper[27819]: I0319 09:38:14.710260 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.712839 master-0 kubenswrapper[27819]: I0319 09:38:14.712801 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-user-template-login\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:14.714301 master-0 kubenswrapper[27819]: I0319 09:38:14.714268 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/21525212-2b1a-40a4-bbd2-d4b000c1c363-v4-0-config-system-session\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:15.293275 master-0 kubenswrapper[27819]: I0319 09:38:15.293199 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-648779ffbc-s842b"] Mar 19 09:38:16.186694 master-0 kubenswrapper[27819]: I0319 09:38:16.186636 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"] Mar 19 09:38:16.238536 master-0 kubenswrapper[27819]: I0319 09:38:16.238404 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-648779ffbc-s842b"] Mar 19 09:38:16.768192 master-0 kubenswrapper[27819]: I0319 09:38:16.768145 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9qzp\" (UniqueName: 
\"kubernetes.io/projected/21525212-2b1a-40a4-bbd2-d4b000c1c363-kube-api-access-d9qzp\") pod \"oauth-openshift-677b56bc7b-7lzf6\" (UID: \"21525212-2b1a-40a4-bbd2-d4b000c1c363\") " pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:16.937870 master-0 kubenswrapper[27819]: I0319 09:38:16.937813 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:17.156108 master-0 kubenswrapper[27819]: I0319 09:38:17.156044 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" event={"ID":"5fcb84d4-a7ab-4240-9638-b0e96d9df84f","Type":"ContainerStarted","Data":"96bb3addf794303917e37bd24c1a834ba9b363583e447057a41b4f3b9d299fa0"} Mar 19 09:38:17.293414 master-0 kubenswrapper[27819]: I0319 09:38:17.293240 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83a3bcd-f590-4e36-846e-255494625539" path="/var/lib/kubelet/pods/e83a3bcd-f590-4e36-846e-255494625539/volumes" Mar 19 09:38:20.223370 master-0 kubenswrapper[27819]: I0319 09:38:20.223326 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-677b56bc7b-7lzf6"] Mar 19 09:38:20.600122 master-0 kubenswrapper[27819]: I0319 09:38:20.598924 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5df74776c8-t52bf"] Mar 19 09:38:20.614609 master-0 kubenswrapper[27819]: I0319 09:38:20.610235 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5df74776c8-t52bf"] Mar 19 09:38:21.194042 master-0 kubenswrapper[27819]: I0319 09:38:21.193981 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" event={"ID":"5fcb84d4-a7ab-4240-9638-b0e96d9df84f","Type":"ContainerStarted","Data":"4fc480d7a079da9ae1a098a12d5f3bd541c3195e67f4b91f6340b3ae61ebeb28"} Mar 19 09:38:21.195735 
master-0 kubenswrapper[27819]: I0319 09:38:21.195694 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" event={"ID":"21525212-2b1a-40a4-bbd2-d4b000c1c363","Type":"ContainerStarted","Data":"db4552a6bcaa664290a9b40142dbd01ae6e277365aff486d7723066836483b23"} Mar 19 09:38:21.195802 master-0 kubenswrapper[27819]: I0319 09:38:21.195747 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" event={"ID":"21525212-2b1a-40a4-bbd2-d4b000c1c363","Type":"ContainerStarted","Data":"9aaf349c5d5d3e7053efbee7ed3ab805c10670c7e4688006bde8a801ba414dd6"} Mar 19 09:38:21.196044 master-0 kubenswrapper[27819]: I0319 09:38:21.196018 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:21.198017 master-0 kubenswrapper[27819]: I0319 09:38:21.197997 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerStarted","Data":"5092c65841205e64d312726c612d61e3003ac79f15c312465f5000ef6b5a181a"} Mar 19 09:38:21.287526 master-0 kubenswrapper[27819]: I0319 09:38:21.287472 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9450ec1b-239d-45ce-9747-a1b372326025" path="/var/lib/kubelet/pods/9450ec1b-239d-45ce-9747-a1b372326025/volumes" Mar 19 09:38:21.717248 master-0 kubenswrapper[27819]: I0319 09:38:21.717182 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:21.717248 master-0 kubenswrapper[27819]: I0319 09:38:21.717234 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b" Mar 19 09:38:22.196739 master-0 kubenswrapper[27819]: I0319 09:38:22.196609 27819 patch_prober.go:28] 
interesting pod/oauth-openshift-677b56bc7b-7lzf6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.99:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 09:38:22.196739 master-0 kubenswrapper[27819]: I0319 09:38:22.196686 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" podUID="21525212-2b1a-40a4-bbd2-d4b000c1c363" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.99:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 19 09:38:22.210089 master-0 kubenswrapper[27819]: I0319 09:38:22.210022 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" event={"ID":"5fcb84d4-a7ab-4240-9638-b0e96d9df84f","Type":"ContainerStarted","Data":"4f2d65d85adf79308b802c70c32a88b926e3d3b40b3f6323d310c22e93ee48c7"} Mar 19 09:38:22.213855 master-0 kubenswrapper[27819]: I0319 09:38:22.213786 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerStarted","Data":"8c57e544f9d3dc2f7f954aabdfb5b61615582bc7d7313ad2a156315ba34db613"} Mar 19 09:38:23.213997 master-0 kubenswrapper[27819]: I0319 09:38:23.213937 27819 patch_prober.go:28] interesting pod/oauth-openshift-677b56bc7b-7lzf6 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.99:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 19 09:38:23.214520 master-0 kubenswrapper[27819]: I0319 09:38:23.214014 27819 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" podUID="21525212-2b1a-40a4-bbd2-d4b000c1c363" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.99:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:38:23.225475 master-0 kubenswrapper[27819]: I0319 09:38:23.225348 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerStarted","Data":"8dd5edb82e70bde069673cd128db6681797dd40812c6ad21a0a47c5575421740"} Mar 19 09:38:25.242415 master-0 kubenswrapper[27819]: I0319 09:38:25.242324 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerStarted","Data":"e6e49a4424825deaccdfbe303240685b1a2e297dc9560d72aadcb39af5e5a426"} Mar 19 09:38:26.266303 master-0 kubenswrapper[27819]: I0319 09:38:26.266202 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerStarted","Data":"90641d4a1badd01bff3e22e5a898da7a4cd1c0a011618cf28718a6b9e8999e5f"} Mar 19 09:38:26.963375 master-0 kubenswrapper[27819]: I0319 09:38:26.962852 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" Mar 19 09:38:26.997386 master-0 kubenswrapper[27819]: I0319 09:38:26.997133 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-677b56bc7b-7lzf6" podStartSLOduration=46.997115898 podStartE2EDuration="46.997115898s" podCreationTimestamp="2026-03-19 09:37:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:38:23.868971764 +0000 UTC 
m=+288.790549476" watchObservedRunningTime="2026-03-19 09:38:26.997115898 +0000 UTC m=+291.918693590" Mar 19 09:38:27.000268 master-0 kubenswrapper[27819]: I0319 09:38:26.999369 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 19 09:38:27.002588 master-0 kubenswrapper[27819]: I0319 09:38:27.002529 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 19 09:38:27.009084 master-0 kubenswrapper[27819]: I0319 09:38:27.006716 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 19 09:38:27.009084 master-0 kubenswrapper[27819]: I0319 09:38:27.007198 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 19 09:38:27.009084 master-0 kubenswrapper[27819]: I0319 09:38:27.007643 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 19 09:38:27.009084 master-0 kubenswrapper[27819]: I0319 09:38:27.007692 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 19 09:38:27.012626 master-0 kubenswrapper[27819]: I0319 09:38:27.011106 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 19 09:38:27.012626 master-0 kubenswrapper[27819]: I0319 09:38:27.011386 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 19 09:38:27.016616 master-0 kubenswrapper[27819]: I0319 09:38:27.016428 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 19 09:38:27.016903 master-0 kubenswrapper[27819]: I0319 09:38:27.016846 27819 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 19 09:38:27.023579 master-0 kubenswrapper[27819]: I0319 09:38:27.019934 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-fcgtdcit2e0md" Mar 19 09:38:27.023579 master-0 kubenswrapper[27819]: I0319 09:38:27.020505 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 19 09:38:27.024196 master-0 kubenswrapper[27819]: I0319 09:38:27.024146 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 19 09:38:27.031412 master-0 kubenswrapper[27819]: I0319 09:38:27.031220 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 19 09:38:27.081593 master-0 kubenswrapper[27819]: I0319 09:38:27.076538 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 19 09:38:27.170591 master-0 kubenswrapper[27819]: I0319 09:38:27.170535 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 19 09:38:27.170841 master-0 kubenswrapper[27819]: I0319 09:38:27.170821 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 19 09:38:27.170949 master-0 kubenswrapper[27819]: I0319 09:38:27.170931 27819 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 19 09:38:27.171045 master-0 kubenswrapper[27819]: I0319 09:38:27.171028 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 19 09:38:27.171140 master-0 kubenswrapper[27819]: I0319 09:38:27.171125 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 19 09:38:27.171253 master-0 kubenswrapper[27819]: I0319 09:38:27.171236 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 19 09:38:27.171350 master-0 kubenswrapper[27819]: I0319 09:38:27.171332 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-tls-assets\") pod \"prometheus-k8s-0\" (UID: 
\"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.171447 master-0 kubenswrapper[27819]: I0319 09:38:27.171431 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.171561 master-0 kubenswrapper[27819]: I0319 09:38:27.171527 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.171695 master-0 kubenswrapper[27819]: I0319 09:38:27.171676 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568br\" (UniqueName: \"kubernetes.io/projected/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-kube-api-access-568br\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.171838 master-0 kubenswrapper[27819]: I0319 09:38:27.171821 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.171947 master-0 kubenswrapper[27819]: I0319 09:38:27.171929 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.172083 master-0 kubenswrapper[27819]: I0319 09:38:27.172068 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-config\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.172203 master-0 kubenswrapper[27819]: I0319 09:38:27.172187 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.172314 master-0 kubenswrapper[27819]: I0319 09:38:27.172299 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.172404 master-0 kubenswrapper[27819]: I0319 09:38:27.172390 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.172514 master-0 kubenswrapper[27819]: I0319 09:38:27.172496 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.172642 master-0 kubenswrapper[27819]: I0319 09:38:27.172624 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.273507 master-0 kubenswrapper[27819]: I0319 09:38:27.273452 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.273507 master-0 kubenswrapper[27819]: I0319 09:38:27.273505 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273552 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273588 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273604 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273626 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273650 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273690 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273714 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273735 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273776 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273798 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273820 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273843 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273880 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-568br\" (UniqueName: \"kubernetes.io/projected/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-kube-api-access-568br\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273901 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273931 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.274425 master-0 kubenswrapper[27819]: I0319 09:38:27.273960 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-config\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.275015 master-0 kubenswrapper[27819]: I0319 09:38:27.274983 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.276189 master-0 kubenswrapper[27819]: I0319 09:38:27.276154 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.283007 master-0 kubenswrapper[27819]: I0319 09:38:27.279986 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.283007 master-0 kubenswrapper[27819]: I0319 09:38:27.280138 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-config\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.283007 master-0 kubenswrapper[27819]: I0319 09:38:27.280844 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.283007 master-0 kubenswrapper[27819]: I0319 09:38:27.281081 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.283007 master-0 kubenswrapper[27819]: I0319 09:38:27.282181 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.289644 master-0 kubenswrapper[27819]: I0319 09:38:27.285196 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.298576 master-0 kubenswrapper[27819]: I0319 09:38:27.293669 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.298576 master-0 kubenswrapper[27819]: I0319 09:38:27.295437 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.298576 master-0 kubenswrapper[27819]: I0319 09:38:27.295970 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.298576 master-0 kubenswrapper[27819]: I0319 09:38:27.296159 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.298576 master-0 kubenswrapper[27819]: I0319 09:38:27.296244 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.298576 master-0 kubenswrapper[27819]: I0319 09:38:27.296323 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.298576 master-0 kubenswrapper[27819]: I0319 09:38:27.296755 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.301604 master-0 kubenswrapper[27819]: I0319 09:38:27.299367 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.304164 master-0 kubenswrapper[27819]: I0319 09:38:27.304006 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.313729 master-0 kubenswrapper[27819]: I0319 09:38:27.313684 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"4674b10a-f1c1-4bc6-a366-6ecbaff1977e","Type":"ContainerStarted","Data":"67c5fc32351f268a9e8db109209fe4255f6c2d8614d715ce06b939a53354f8fa"}
Mar 19 09:38:27.315323 master-0 kubenswrapper[27819]: I0319 09:38:27.315300 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-568br\" (UniqueName: \"kubernetes.io/projected/6e7d084b-b42a-408e-9919-ef41dc2d0d8c-kube-api-access-568br\") pod \"prometheus-k8s-0\" (UID: \"6e7d084b-b42a-408e-9919-ef41dc2d0d8c\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.339048 master-0 kubenswrapper[27819]: I0319 09:38:27.338997 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" event={"ID":"5fcb84d4-a7ab-4240-9638-b0e96d9df84f","Type":"ContainerStarted","Data":"9c6c21aa74fdb133dbfdc6c0a7889e1c26ba255681d9e541489db1885be6c600"}
Mar 19 09:38:27.358217 master-0 kubenswrapper[27819]: I0319 09:38:27.358121 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=8.500426325 podStartE2EDuration="33.358104373s" podCreationTimestamp="2026-03-19 09:37:54 +0000 UTC" firstStartedPulling="2026-03-19 09:38:02.013241578 +0000 UTC m=+266.934819270" lastFinishedPulling="2026-03-19 09:38:26.870919626 +0000 UTC m=+291.792497318" observedRunningTime="2026-03-19 09:38:27.353444827 +0000 UTC m=+292.275022529" watchObservedRunningTime="2026-03-19 09:38:27.358104373 +0000 UTC m=+292.279682065"
Mar 19 09:38:27.388815 master-0 kubenswrapper[27819]: I0319 09:38:27.388745 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:27.822672 master-0 kubenswrapper[27819]: I0319 09:38:27.822627 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 19 09:38:27.824702 master-0 kubenswrapper[27819]: W0319 09:38:27.824638 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e7d084b_b42a_408e_9919_ef41dc2d0d8c.slice/crio-838b5745e0bc6b04bf79a850c0ead37925dd00c3ee4125a36fa0d1d5fe7a4841 WatchSource:0}: Error finding container 838b5745e0bc6b04bf79a850c0ead37925dd00c3ee4125a36fa0d1d5fe7a4841: Status 404 returned error can't find the container with id 838b5745e0bc6b04bf79a850c0ead37925dd00c3ee4125a36fa0d1d5fe7a4841
Mar 19 09:38:28.363185 master-0 kubenswrapper[27819]: I0319 09:38:28.363072 27819 generic.go:334] "Generic (PLEG): container finished" podID="6e7d084b-b42a-408e-9919-ef41dc2d0d8c" containerID="8f91d47288ba9b096519ea2c9c409f308e032a90e4f441bb7641685255136e00" exitCode=0
Mar 19 09:38:28.363185 master-0 kubenswrapper[27819]: I0319 09:38:28.363150 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerDied","Data":"8f91d47288ba9b096519ea2c9c409f308e032a90e4f441bb7641685255136e00"}
Mar 19 09:38:28.363185 master-0 kubenswrapper[27819]: I0319 09:38:28.363179 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerStarted","Data":"838b5745e0bc6b04bf79a850c0ead37925dd00c3ee4125a36fa0d1d5fe7a4841"}
Mar 19 09:38:28.367602 master-0 kubenswrapper[27819]: I0319 09:38:28.367567 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" event={"ID":"5fcb84d4-a7ab-4240-9638-b0e96d9df84f","Type":"ContainerStarted","Data":"3fe79fd3d20b879be94bffc755cb0f089b94e7b415a8693df3cef548102f7b7f"}
Mar 19 09:38:28.367602 master-0 kubenswrapper[27819]: I0319 09:38:28.367603 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:38:28.367730 master-0 kubenswrapper[27819]: I0319 09:38:28.367613 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" event={"ID":"5fcb84d4-a7ab-4240-9638-b0e96d9df84f","Type":"ContainerStarted","Data":"7eba0a8dba8b8c783bf16ae0bf84eb09aa56816e5b546ded1bcbbb3cd2410209"}
Mar 19 09:38:28.385228 master-0 kubenswrapper[27819]: I0319 09:38:28.385173 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4"
Mar 19 09:38:28.431995 master-0 kubenswrapper[27819]: I0319 09:38:28.431876 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-6f854479d9-fwtt4" podStartSLOduration=7.822380966 podStartE2EDuration="33.431855469s" podCreationTimestamp="2026-03-19 09:37:55 +0000 UTC" firstStartedPulling="2026-03-19 09:38:01.259605034 +0000 UTC m=+266.181182726" lastFinishedPulling="2026-03-19 09:38:26.869079537 +0000 UTC m=+291.790657229" observedRunningTime="2026-03-19 09:38:28.430857531 +0000 UTC m=+293.352435233" watchObservedRunningTime="2026-03-19 09:38:28.431855469 +0000 UTC m=+293.353433161"
Mar 19 09:38:33.931031 master-0 kubenswrapper[27819]: I0319 09:38:33.930977 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerStarted","Data":"5b14c2cc7547c3a6c65d73028cc666a900788968c08fbde8153d77a39c7d2b44"}
Mar 19 09:38:34.946439 master-0 kubenswrapper[27819]: I0319 09:38:34.946397 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerStarted","Data":"dd5f704beb34d8f8acd56afe9d98351261ebb89250f548769f9a6e1093b02484"}
Mar 19 09:38:34.946439 master-0 kubenswrapper[27819]: I0319 09:38:34.946442 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerStarted","Data":"ca0d38e1fc2deb7115790c87ad062af51766467a749da2a32201cd614116a871"}
Mar 19 09:38:34.946960 master-0 kubenswrapper[27819]: I0319 09:38:34.946454 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerStarted","Data":"50e1237f7715f00cdcdc2af2795dcc30d02afe1e1bb0f9bef3885b2259934d05"}
Mar 19 09:38:34.946960 master-0 kubenswrapper[27819]: I0319 09:38:34.946463 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerStarted","Data":"34b8c4ea601efb0791722464d9bd4e0a7443664f291abf7c5bde5095dd9861d3"}
Mar 19 09:38:34.946960 master-0 kubenswrapper[27819]: I0319 09:38:34.946471 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6e7d084b-b42a-408e-9919-ef41dc2d0d8c","Type":"ContainerStarted","Data":"2a1b705e60c1f3fc36ee9ba7ac873444bccb01189d6913a5e42c35092dcb2c88"}
Mar 19 09:38:34.980348 master-0 kubenswrapper[27819]: I0319 09:38:34.980263 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.754160046 podStartE2EDuration="8.980244483s" podCreationTimestamp="2026-03-19 09:38:26 +0000 UTC" firstStartedPulling="2026-03-19 09:38:28.364808486 +0000 UTC m=+293.286386178" lastFinishedPulling="2026-03-19 09:38:33.590892923 +0000 UTC m=+298.512470615" observedRunningTime="2026-03-19 09:38:34.975952257 +0000 UTC m=+299.897529949" watchObservedRunningTime="2026-03-19 09:38:34.980244483 +0000 UTC m=+299.901822175"
Mar 19 09:38:35.277589 master-0 kubenswrapper[27819]: I0319 09:38:35.277148 27819 kubelet.go:1505] "Image garbage collection succeeded"
Mar 19 09:38:37.389681 master-0 kubenswrapper[27819]: I0319 09:38:37.389621 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:38:41.723224 master-0 kubenswrapper[27819]: I0319 09:38:41.723151 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b"
Mar 19 09:38:41.727370 master-0 kubenswrapper[27819]: I0319 09:38:41.727325 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7679cd5c8b-nf76b"
Mar 19 09:39:27.389838 master-0 kubenswrapper[27819]: I0319 09:39:27.389749 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:39:27.424757 master-0 kubenswrapper[27819]: I0319 09:39:27.424712 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:39:28.387301 master-0 kubenswrapper[27819]: I0319 09:39:28.387225 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 19 09:39:35.545135 master-0 kubenswrapper[27819]: I0319 09:39:35.545066 27819 scope.go:117] "RemoveContainer" containerID="6ef69a9aa568c569e28a8cf9a8398ecd1d39a543a999398bc8742b280aa881bd"
Mar 19 09:39:44.013279 master-0 kubenswrapper[27819]: I0319 09:39:44.013183 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"]
Mar 19 09:39:44.014376 master-0 kubenswrapper[27819]: I0319 09:39:44.014085 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.017464 master-0 kubenswrapper[27819]: I0319 09:39:44.017401 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cnb44"
Mar 19 09:39:44.018713 master-0 kubenswrapper[27819]: I0319 09:39:44.018615 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 19 09:39:44.035674 master-0 kubenswrapper[27819]: I0319 09:39:44.035607 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"]
Mar 19 09:39:44.150055 master-0 kubenswrapper[27819]: I0319 09:39:44.149989 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-var-lock\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.150298 master-0 kubenswrapper[27819]: I0319 09:39:44.150078 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ee88483-d819-42ea-ad81-b4e3f12f226a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.150298 master-0 kubenswrapper[27819]: I0319 09:39:44.150138 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.252078 master-0 kubenswrapper[27819]: I0319 09:39:44.251929 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-var-lock\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.252078 master-0 kubenswrapper[27819]: I0319 09:39:44.252076 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ee88483-d819-42ea-ad81-b4e3f12f226a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.252390 master-0 kubenswrapper[27819]: I0319 09:39:44.252108 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.252390 master-0 kubenswrapper[27819]: I0319 09:39:44.252017 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-var-lock\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.252581 master-0 kubenswrapper[27819]: I0319 09:39:44.252518 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.267669 master-0 kubenswrapper[27819]: I0319 09:39:44.267521 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ee88483-d819-42ea-ad81-b4e3f12f226a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.335582 master-0 kubenswrapper[27819]: I0319 09:39:44.335467 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0"
Mar 19 09:39:44.758808 master-0 kubenswrapper[27819]: I0319 09:39:44.758735 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"]
Mar 19 09:39:45.476051 master-0 kubenswrapper[27819]: I0319 09:39:45.475970 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"5ee88483-d819-42ea-ad81-b4e3f12f226a","Type":"ContainerStarted","Data":"ee272c73577938af4d4e52d6d6676e0e865a3d3cd2b195a33105d4f3123942a4"}
Mar 19 09:39:45.476051 master-0 kubenswrapper[27819]: I0319 09:39:45.476040 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"5ee88483-d819-42ea-ad81-b4e3f12f226a","Type":"ContainerStarted","Data":"2afe2e8260d475324c12903de93db84f0ed93a039ca6dbfba8a707557c42db5f"}
Mar 19 09:39:45.492103 master-0 kubenswrapper[27819]: I0319 09:39:45.491969 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-0" podStartSLOduration=1.491945509 podStartE2EDuration="1.491945509s" podCreationTimestamp="2026-03-19 09:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:39:45.490560242 +0000 UTC m=+370.412137934" watchObservedRunningTime="2026-03-19 09:39:45.491945509 +0000 UTC m=+370.413523221"
Mar 19 09:39:50.325577 master-0 kubenswrapper[27819]: I0319 09:39:50.322246 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bcfd95575-qmxs7"]
Mar 19 09:39:50.325577 master-0 kubenswrapper[27819]: I0319 09:39:50.323463 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.358181 master-0 kubenswrapper[27819]: I0319 09:39:50.358106 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bcfd95575-qmxs7"]
Mar 19 09:39:50.445190 master-0 kubenswrapper[27819]: I0319 09:39:50.445133 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-service-ca\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.445190 master-0 kubenswrapper[27819]: I0319 09:39:50.445186 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwz6q\" (UniqueName: \"kubernetes.io/projected/306638cd-9ae4-48a3-a7d7-7e3b935df93f-kube-api-access-nwz6q\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.445503 master-0 kubenswrapper[27819]: I0319 09:39:50.445213 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-config\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.445503 master-0 kubenswrapper[27819]: I0319 09:39:50.445230 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-oauth-serving-cert\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.445503 master-0 kubenswrapper[27819]: I0319 09:39:50.445260 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-serving-cert\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.445503 master-0 kubenswrapper[27819]: I0319 09:39:50.445282 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-oauth-config\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.445503 master-0 kubenswrapper[27819]: I0319 09:39:50.445306 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-trusted-ca-bundle\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.546392 master-0 kubenswrapper[27819]: I0319 09:39:50.546343 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-serving-cert\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.546392 master-0 kubenswrapper[27819]: I0319 09:39:50.546393 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-oauth-config\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.546645 master-0 kubenswrapper[27819]: I0319 09:39:50.546420 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-trusted-ca-bundle\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.546645 master-0 kubenswrapper[27819]: I0319 09:39:50.546596 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-service-ca\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.546721 master-0 kubenswrapper[27819]: I0319 09:39:50.546678 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwz6q\" (UniqueName: \"kubernetes.io/projected/306638cd-9ae4-48a3-a7d7-7e3b935df93f-kube-api-access-nwz6q\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.546759 master-0 kubenswrapper[27819]: I0319 09:39:50.546723 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-config\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7"
Mar 19 09:39:50.546759 master-0 kubenswrapper[27819]: I0319 09:39:50.546751 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-oauth-serving-cert\") pod
\"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.547554 master-0 kubenswrapper[27819]: I0319 09:39:50.547516 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-service-ca\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.547727 master-0 kubenswrapper[27819]: I0319 09:39:50.547678 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-config\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.548016 master-0 kubenswrapper[27819]: I0319 09:39:50.547982 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-oauth-serving-cert\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.548016 master-0 kubenswrapper[27819]: I0319 09:39:50.548002 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-trusted-ca-bundle\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.552980 master-0 kubenswrapper[27819]: I0319 09:39:50.552937 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-oauth-config\") pod 
\"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.553075 master-0 kubenswrapper[27819]: I0319 09:39:50.553041 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-serving-cert\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.567454 master-0 kubenswrapper[27819]: I0319 09:39:50.567398 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwz6q\" (UniqueName: \"kubernetes.io/projected/306638cd-9ae4-48a3-a7d7-7e3b935df93f-kube-api-access-nwz6q\") pod \"console-6bcfd95575-qmxs7\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:50.660122 master-0 kubenswrapper[27819]: I0319 09:39:50.660048 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:39:51.066264 master-0 kubenswrapper[27819]: I0319 09:39:51.066207 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bcfd95575-qmxs7"] Mar 19 09:39:51.068577 master-0 kubenswrapper[27819]: W0319 09:39:51.068517 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod306638cd_9ae4_48a3_a7d7_7e3b935df93f.slice/crio-47a0a22edb6a1a7570f9a898a3cefc844beff97d37fc8480116127fb9efa9655 WatchSource:0}: Error finding container 47a0a22edb6a1a7570f9a898a3cefc844beff97d37fc8480116127fb9efa9655: Status 404 returned error can't find the container with id 47a0a22edb6a1a7570f9a898a3cefc844beff97d37fc8480116127fb9efa9655 Mar 19 09:39:51.515689 master-0 kubenswrapper[27819]: I0319 09:39:51.515631 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcfd95575-qmxs7" event={"ID":"306638cd-9ae4-48a3-a7d7-7e3b935df93f","Type":"ContainerStarted","Data":"e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354"} Mar 19 09:39:51.515689 master-0 kubenswrapper[27819]: I0319 09:39:51.515678 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcfd95575-qmxs7" event={"ID":"306638cd-9ae4-48a3-a7d7-7e3b935df93f","Type":"ContainerStarted","Data":"47a0a22edb6a1a7570f9a898a3cefc844beff97d37fc8480116127fb9efa9655"} Mar 19 09:40:00.661291 master-0 kubenswrapper[27819]: I0319 09:40:00.661039 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:40:00.661291 master-0 kubenswrapper[27819]: I0319 09:40:00.661115 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:40:00.665702 master-0 kubenswrapper[27819]: I0319 09:40:00.665639 27819 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:40:00.682630 master-0 kubenswrapper[27819]: I0319 09:40:00.682520 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bcfd95575-qmxs7" podStartSLOduration=10.682502353 podStartE2EDuration="10.682502353s" podCreationTimestamp="2026-03-19 09:39:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:39:51.537165149 +0000 UTC m=+376.458742851" watchObservedRunningTime="2026-03-19 09:40:00.682502353 +0000 UTC m=+385.604080045" Mar 19 09:40:01.597760 master-0 kubenswrapper[27819]: I0319 09:40:01.597683 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:40:01.681915 master-0 kubenswrapper[27819]: I0319 09:40:01.680818 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d97df8cb5-9hrl2"] Mar 19 09:40:17.749961 master-0 kubenswrapper[27819]: I0319 09:40:17.749921 27819 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:40:17.750969 master-0 kubenswrapper[27819]: I0319 09:40:17.750898 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://59bfbc24aed025cbeb33d0e5a40c5d7418d9f9aec04c5fd5b96dbe02fab0ba33" gracePeriod=30 Mar 19 09:40:17.751050 master-0 kubenswrapper[27819]: I0319 09:40:17.750928 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" 
containerName="kube-controller-manager-cert-syncer" containerID="cri-o://81557cf106fbd5f4a3b2964beaaeaf69341eb9b15abccbc6d3aef5351309e1d6" gracePeriod=30 Mar 19 09:40:17.751050 master-0 kubenswrapper[27819]: I0319 09:40:17.750914 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" containerID="cri-o://f3ffe4fec33c46fff754b84bc96e8c84dff07f2714439153ed5a5e81bfd1df38" gracePeriod=30 Mar 19 09:40:17.751127 master-0 kubenswrapper[27819]: I0319 09:40:17.750845 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="cluster-policy-controller" containerID="cri-o://74b087b8a1f11417cfbc6b3012b38420ffa8a4dbed87e2e5a22cd51bf2974639" gracePeriod=30 Mar 19 09:40:17.752213 master-0 kubenswrapper[27819]: I0319 09:40:17.752180 27819 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:40:17.752500 master-0 kubenswrapper[27819]: E0319 09:40:17.752479 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.752500 master-0 kubenswrapper[27819]: I0319 09:40:17.752497 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.752608 master-0 kubenswrapper[27819]: E0319 09:40:17.752516 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.752608 master-0 kubenswrapper[27819]: I0319 09:40:17.752522 27819 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.752608 master-0 kubenswrapper[27819]: E0319 09:40:17.752577 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager-cert-syncer" Mar 19 09:40:17.752608 master-0 kubenswrapper[27819]: I0319 09:40:17.752584 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager-cert-syncer" Mar 19 09:40:17.752608 master-0 kubenswrapper[27819]: E0319 09:40:17.752591 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="cluster-policy-controller" Mar 19 09:40:17.752608 master-0 kubenswrapper[27819]: I0319 09:40:17.752599 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="cluster-policy-controller" Mar 19 09:40:17.752608 master-0 kubenswrapper[27819]: E0319 09:40:17.752608 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.752813 master-0 kubenswrapper[27819]: I0319 09:40:17.752615 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.752813 master-0 kubenswrapper[27819]: E0319 09:40:17.752661 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager-recovery-controller" Mar 19 09:40:17.752813 master-0 kubenswrapper[27819]: I0319 09:40:17.752669 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager-recovery-controller" Mar 19 09:40:17.752813 master-0 kubenswrapper[27819]: I0319 09:40:17.752786 27819 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="cluster-policy-controller" Mar 19 09:40:17.752813 master-0 kubenswrapper[27819]: I0319 09:40:17.752807 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.752813 master-0 kubenswrapper[27819]: I0319 09:40:17.752815 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.753029 master-0 kubenswrapper[27819]: I0319 09:40:17.752822 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager-recovery-controller" Mar 19 09:40:17.753029 master-0 kubenswrapper[27819]: I0319 09:40:17.752830 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager" Mar 19 09:40:17.753029 master-0 kubenswrapper[27819]: I0319 09:40:17.752843 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3939b09ae7c21557b3dd5ab01349318" containerName="kube-controller-manager-cert-syncer" Mar 19 09:40:17.880758 master-0 kubenswrapper[27819]: I0319 09:40:17.880560 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/01ee49b6be31c6b1924d1c0337571f4b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"01ee49b6be31c6b1924d1c0337571f4b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:17.880758 master-0 kubenswrapper[27819]: I0319 09:40:17.880697 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/01ee49b6be31c6b1924d1c0337571f4b-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"01ee49b6be31c6b1924d1c0337571f4b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:17.925199 master-0 kubenswrapper[27819]: E0319 09:40:17.925154 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3939b09ae7c21557b3dd5ab01349318.slice/crio-81557cf106fbd5f4a3b2964beaaeaf69341eb9b15abccbc6d3aef5351309e1d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3939b09ae7c21557b3dd5ab01349318.slice/crio-conmon-f3ffe4fec33c46fff754b84bc96e8c84dff07f2714439153ed5a5e81bfd1df38.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3939b09ae7c21557b3dd5ab01349318.slice/crio-conmon-81557cf106fbd5f4a3b2964beaaeaf69341eb9b15abccbc6d3aef5351309e1d6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3939b09ae7c21557b3dd5ab01349318.slice/crio-conmon-59bfbc24aed025cbeb33d0e5a40c5d7418d9f9aec04c5fd5b96dbe02fab0ba33.scope\": RecentStats: unable to find data in memory cache]" Mar 19 09:40:17.981556 master-0 kubenswrapper[27819]: I0319 09:40:17.981485 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/01ee49b6be31c6b1924d1c0337571f4b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"01ee49b6be31c6b1924d1c0337571f4b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:17.981830 master-0 kubenswrapper[27819]: I0319 09:40:17.981637 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/01ee49b6be31c6b1924d1c0337571f4b-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"01ee49b6be31c6b1924d1c0337571f4b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:17.981830 master-0 kubenswrapper[27819]: I0319 09:40:17.981728 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/01ee49b6be31c6b1924d1c0337571f4b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"01ee49b6be31c6b1924d1c0337571f4b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:17.981830 master-0 kubenswrapper[27819]: I0319 09:40:17.981750 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/01ee49b6be31c6b1924d1c0337571f4b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"01ee49b6be31c6b1924d1c0337571f4b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:17.986841 master-0 kubenswrapper[27819]: I0319 09:40:17.986772 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager/1.log" Mar 19 09:40:17.987805 master-0 kubenswrapper[27819]: I0319 09:40:17.987764 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager-cert-syncer/0.log" Mar 19 09:40:17.988247 master-0 kubenswrapper[27819]: I0319 09:40:17.988219 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:17.991640 master-0 kubenswrapper[27819]: I0319 09:40:17.991595 27819 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="d3939b09ae7c21557b3dd5ab01349318" podUID="01ee49b6be31c6b1924d1c0337571f4b" Mar 19 09:40:18.082614 master-0 kubenswrapper[27819]: I0319 09:40:18.082494 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") pod \"d3939b09ae7c21557b3dd5ab01349318\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " Mar 19 09:40:18.082783 master-0 kubenswrapper[27819]: I0319 09:40:18.082630 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") pod \"d3939b09ae7c21557b3dd5ab01349318\" (UID: \"d3939b09ae7c21557b3dd5ab01349318\") " Mar 19 09:40:18.082783 master-0 kubenswrapper[27819]: I0319 09:40:18.082626 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "d3939b09ae7c21557b3dd5ab01349318" (UID: "d3939b09ae7c21557b3dd5ab01349318"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:40:18.082856 master-0 kubenswrapper[27819]: I0319 09:40:18.082671 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "d3939b09ae7c21557b3dd5ab01349318" (UID: "d3939b09ae7c21557b3dd5ab01349318"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:40:18.082952 master-0 kubenswrapper[27819]: I0319 09:40:18.082926 27819 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:18.082952 master-0 kubenswrapper[27819]: I0319 09:40:18.082947 27819 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d3939b09ae7c21557b3dd5ab01349318-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:18.731995 master-0 kubenswrapper[27819]: I0319 09:40:18.731925 27819 generic.go:334] "Generic (PLEG): container finished" podID="5ee88483-d819-42ea-ad81-b4e3f12f226a" containerID="ee272c73577938af4d4e52d6d6676e0e865a3d3cd2b195a33105d4f3123942a4" exitCode=0 Mar 19 09:40:18.732436 master-0 kubenswrapper[27819]: I0319 09:40:18.732001 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"5ee88483-d819-42ea-ad81-b4e3f12f226a","Type":"ContainerDied","Data":"ee272c73577938af4d4e52d6d6676e0e865a3d3cd2b195a33105d4f3123942a4"} Mar 19 09:40:18.733814 master-0 kubenswrapper[27819]: I0319 09:40:18.733793 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager/1.log" Mar 19 09:40:18.734353 master-0 kubenswrapper[27819]: I0319 09:40:18.734325 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager-cert-syncer/0.log" Mar 19 09:40:18.734727 master-0 kubenswrapper[27819]: I0319 09:40:18.734535 27819 generic.go:334] "Generic (PLEG): container finished" podID="d3939b09ae7c21557b3dd5ab01349318" 
containerID="f3ffe4fec33c46fff754b84bc96e8c84dff07f2714439153ed5a5e81bfd1df38" exitCode=0 Mar 19 09:40:18.734727 master-0 kubenswrapper[27819]: I0319 09:40:18.734568 27819 generic.go:334] "Generic (PLEG): container finished" podID="d3939b09ae7c21557b3dd5ab01349318" containerID="59bfbc24aed025cbeb33d0e5a40c5d7418d9f9aec04c5fd5b96dbe02fab0ba33" exitCode=0 Mar 19 09:40:18.734727 master-0 kubenswrapper[27819]: I0319 09:40:18.734576 27819 generic.go:334] "Generic (PLEG): container finished" podID="d3939b09ae7c21557b3dd5ab01349318" containerID="81557cf106fbd5f4a3b2964beaaeaf69341eb9b15abccbc6d3aef5351309e1d6" exitCode=2 Mar 19 09:40:18.734727 master-0 kubenswrapper[27819]: I0319 09:40:18.734585 27819 generic.go:334] "Generic (PLEG): container finished" podID="d3939b09ae7c21557b3dd5ab01349318" containerID="74b087b8a1f11417cfbc6b3012b38420ffa8a4dbed87e2e5a22cd51bf2974639" exitCode=0 Mar 19 09:40:18.734727 master-0 kubenswrapper[27819]: I0319 09:40:18.734610 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f39d5a946115d9aa2743e9655b3338b055f600620261ca0fa9e3a2d4b1e5b19b" Mar 19 09:40:18.734727 master-0 kubenswrapper[27819]: I0319 09:40:18.734623 27819 scope.go:117] "RemoveContainer" containerID="5be498e28584ea542ce41a8bc159a7ded439c5a053defc650ccce7fc0d099fa0" Mar 19 09:40:18.734727 master-0 kubenswrapper[27819]: I0319 09:40:18.734723 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:18.763312 master-0 kubenswrapper[27819]: I0319 09:40:18.763200 27819 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="d3939b09ae7c21557b3dd5ab01349318" podUID="01ee49b6be31c6b1924d1c0337571f4b" Mar 19 09:40:19.291804 master-0 kubenswrapper[27819]: I0319 09:40:19.291721 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3939b09ae7c21557b3dd5ab01349318" path="/var/lib/kubelet/pods/d3939b09ae7c21557b3dd5ab01349318/volumes" Mar 19 09:40:19.743669 master-0 kubenswrapper[27819]: I0319 09:40:19.743469 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d3939b09ae7c21557b3dd5ab01349318/kube-controller-manager-cert-syncer/0.log" Mar 19 09:40:20.025431 master-0 kubenswrapper[27819]: I0319 09:40:20.025116 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 19 09:40:20.116624 master-0 kubenswrapper[27819]: I0319 09:40:20.116127 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ee88483-d819-42ea-ad81-b4e3f12f226a-kube-api-access\") pod \"5ee88483-d819-42ea-ad81-b4e3f12f226a\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " Mar 19 09:40:20.116624 master-0 kubenswrapper[27819]: I0319 09:40:20.116252 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-kubelet-dir\") pod \"5ee88483-d819-42ea-ad81-b4e3f12f226a\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " Mar 19 09:40:20.116624 master-0 kubenswrapper[27819]: I0319 09:40:20.116286 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-var-lock\") pod \"5ee88483-d819-42ea-ad81-b4e3f12f226a\" (UID: \"5ee88483-d819-42ea-ad81-b4e3f12f226a\") " Mar 19 09:40:20.116624 master-0 kubenswrapper[27819]: I0319 09:40:20.116509 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ee88483-d819-42ea-ad81-b4e3f12f226a" (UID: "5ee88483-d819-42ea-ad81-b4e3f12f226a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:40:20.116624 master-0 kubenswrapper[27819]: I0319 09:40:20.116568 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-var-lock" (OuterVolumeSpecName: "var-lock") pod "5ee88483-d819-42ea-ad81-b4e3f12f226a" (UID: "5ee88483-d819-42ea-ad81-b4e3f12f226a"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:40:20.117124 master-0 kubenswrapper[27819]: I0319 09:40:20.116812 27819 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:20.117124 master-0 kubenswrapper[27819]: I0319 09:40:20.116827 27819 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ee88483-d819-42ea-ad81-b4e3f12f226a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:20.118697 master-0 kubenswrapper[27819]: I0319 09:40:20.118657 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee88483-d819-42ea-ad81-b4e3f12f226a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ee88483-d819-42ea-ad81-b4e3f12f226a" (UID: "5ee88483-d819-42ea-ad81-b4e3f12f226a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:40:20.218685 master-0 kubenswrapper[27819]: I0319 09:40:20.218635 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ee88483-d819-42ea-ad81-b4e3f12f226a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:20.755415 master-0 kubenswrapper[27819]: I0319 09:40:20.755350 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"5ee88483-d819-42ea-ad81-b4e3f12f226a","Type":"ContainerDied","Data":"2afe2e8260d475324c12903de93db84f0ed93a039ca6dbfba8a707557c42db5f"} Mar 19 09:40:20.755415 master-0 kubenswrapper[27819]: I0319 09:40:20.755398 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2afe2e8260d475324c12903de93db84f0ed93a039ca6dbfba8a707557c42db5f" Mar 19 09:40:20.755706 master-0 kubenswrapper[27819]: I0319 09:40:20.755470 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 19 09:40:26.772843 master-0 kubenswrapper[27819]: I0319 09:40:26.772745 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d97df8cb5-9hrl2" podUID="25a9e5e1-e5d5-457d-8c54-8b58dca34985" containerName="console" containerID="cri-o://2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647" gracePeriod=15 Mar 19 09:40:27.175860 master-0 kubenswrapper[27819]: I0319 09:40:27.175817 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d97df8cb5-9hrl2_25a9e5e1-e5d5-457d-8c54-8b58dca34985/console/0.log" Mar 19 09:40:27.176084 master-0 kubenswrapper[27819]: I0319 09:40:27.175887 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:40:27.226149 master-0 kubenswrapper[27819]: I0319 09:40:27.226095 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cplnx\" (UniqueName: \"kubernetes.io/projected/25a9e5e1-e5d5-457d-8c54-8b58dca34985-kube-api-access-cplnx\") pod \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " Mar 19 09:40:27.226358 master-0 kubenswrapper[27819]: I0319 09:40:27.226230 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-config\") pod \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " Mar 19 09:40:27.226358 master-0 kubenswrapper[27819]: I0319 09:40:27.226271 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-service-ca\") pod \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " Mar 19 09:40:27.226358 master-0 kubenswrapper[27819]: I0319 09:40:27.226294 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-oauth-config\") pod \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " Mar 19 09:40:27.226680 master-0 kubenswrapper[27819]: I0319 09:40:27.226626 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-serving-cert\") pod \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " Mar 19 09:40:27.227285 master-0 kubenswrapper[27819]: I0319 
09:40:27.226852 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-config" (OuterVolumeSpecName: "console-config") pod "25a9e5e1-e5d5-457d-8c54-8b58dca34985" (UID: "25a9e5e1-e5d5-457d-8c54-8b58dca34985"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:40:27.227285 master-0 kubenswrapper[27819]: I0319 09:40:27.226898 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-trusted-ca-bundle\") pod \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " Mar 19 09:40:27.227285 master-0 kubenswrapper[27819]: I0319 09:40:27.227025 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-oauth-serving-cert\") pod \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\" (UID: \"25a9e5e1-e5d5-457d-8c54-8b58dca34985\") " Mar 19 09:40:27.227285 master-0 kubenswrapper[27819]: I0319 09:40:27.227138 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-service-ca" (OuterVolumeSpecName: "service-ca") pod "25a9e5e1-e5d5-457d-8c54-8b58dca34985" (UID: "25a9e5e1-e5d5-457d-8c54-8b58dca34985"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:40:27.227444 master-0 kubenswrapper[27819]: I0319 09:40:27.227345 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "25a9e5e1-e5d5-457d-8c54-8b58dca34985" (UID: "25a9e5e1-e5d5-457d-8c54-8b58dca34985"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:40:27.227852 master-0 kubenswrapper[27819]: I0319 09:40:27.227804 27819 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:27.227967 master-0 kubenswrapper[27819]: I0319 09:40:27.227937 27819 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:27.228059 master-0 kubenswrapper[27819]: I0319 09:40:27.227986 27819 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:27.228059 master-0 kubenswrapper[27819]: I0319 09:40:27.227929 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "25a9e5e1-e5d5-457d-8c54-8b58dca34985" (UID: "25a9e5e1-e5d5-457d-8c54-8b58dca34985"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:40:27.229009 master-0 kubenswrapper[27819]: I0319 09:40:27.228988 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "25a9e5e1-e5d5-457d-8c54-8b58dca34985" (UID: "25a9e5e1-e5d5-457d-8c54-8b58dca34985"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:40:27.230167 master-0 kubenswrapper[27819]: I0319 09:40:27.230133 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "25a9e5e1-e5d5-457d-8c54-8b58dca34985" (UID: "25a9e5e1-e5d5-457d-8c54-8b58dca34985"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:40:27.230938 master-0 kubenswrapper[27819]: I0319 09:40:27.230890 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a9e5e1-e5d5-457d-8c54-8b58dca34985-kube-api-access-cplnx" (OuterVolumeSpecName: "kube-api-access-cplnx") pod "25a9e5e1-e5d5-457d-8c54-8b58dca34985" (UID: "25a9e5e1-e5d5-457d-8c54-8b58dca34985"). InnerVolumeSpecName "kube-api-access-cplnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:40:27.329894 master-0 kubenswrapper[27819]: I0319 09:40:27.329819 27819 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:27.329894 master-0 kubenswrapper[27819]: I0319 09:40:27.329873 27819 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/25a9e5e1-e5d5-457d-8c54-8b58dca34985-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:27.330329 master-0 kubenswrapper[27819]: I0319 09:40:27.329888 27819 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/25a9e5e1-e5d5-457d-8c54-8b58dca34985-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:27.330329 master-0 kubenswrapper[27819]: I0319 09:40:27.329935 27819 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-cplnx\" (UniqueName: \"kubernetes.io/projected/25a9e5e1-e5d5-457d-8c54-8b58dca34985-kube-api-access-cplnx\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:27.815886 master-0 kubenswrapper[27819]: I0319 09:40:27.815857 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d97df8cb5-9hrl2_25a9e5e1-e5d5-457d-8c54-8b58dca34985/console/0.log" Mar 19 09:40:27.816423 master-0 kubenswrapper[27819]: I0319 09:40:27.816401 27819 generic.go:334] "Generic (PLEG): container finished" podID="25a9e5e1-e5d5-457d-8c54-8b58dca34985" containerID="2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647" exitCode=2 Mar 19 09:40:27.816506 master-0 kubenswrapper[27819]: I0319 09:40:27.816479 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d97df8cb5-9hrl2" Mar 19 09:40:27.816568 master-0 kubenswrapper[27819]: I0319 09:40:27.816484 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d97df8cb5-9hrl2" event={"ID":"25a9e5e1-e5d5-457d-8c54-8b58dca34985","Type":"ContainerDied","Data":"2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647"} Mar 19 09:40:27.816613 master-0 kubenswrapper[27819]: I0319 09:40:27.816589 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d97df8cb5-9hrl2" event={"ID":"25a9e5e1-e5d5-457d-8c54-8b58dca34985","Type":"ContainerDied","Data":"fad7b79a4d5a2917f8de6bde47827afb5ec2350be21021ae503c28b7bbf7724a"} Mar 19 09:40:27.816646 master-0 kubenswrapper[27819]: I0319 09:40:27.816629 27819 scope.go:117] "RemoveContainer" containerID="2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647" Mar 19 09:40:27.835778 master-0 kubenswrapper[27819]: I0319 09:40:27.835717 27819 scope.go:117] "RemoveContainer" containerID="2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647" Mar 19 09:40:27.836204 master-0 
kubenswrapper[27819]: E0319 09:40:27.836174 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647\": container with ID starting with 2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647 not found: ID does not exist" containerID="2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647" Mar 19 09:40:27.836278 master-0 kubenswrapper[27819]: I0319 09:40:27.836204 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647"} err="failed to get container status \"2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647\": rpc error: code = NotFound desc = could not find container \"2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647\": container with ID starting with 2336286e4e16ac9de702164e6c976417b71be854b8a08ea0f0a7dd23a2cae647 not found: ID does not exist" Mar 19 09:40:27.839614 master-0 kubenswrapper[27819]: I0319 09:40:27.839569 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d97df8cb5-9hrl2"] Mar 19 09:40:27.844624 master-0 kubenswrapper[27819]: I0319 09:40:27.844567 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d97df8cb5-9hrl2"] Mar 19 09:40:29.319863 master-0 kubenswrapper[27819]: I0319 09:40:29.296679 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a9e5e1-e5d5-457d-8c54-8b58dca34985" path="/var/lib/kubelet/pods/25a9e5e1-e5d5-457d-8c54-8b58dca34985/volumes" Mar 19 09:40:30.279511 master-0 kubenswrapper[27819]: I0319 09:40:30.279422 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:30.292638 master-0 kubenswrapper[27819]: I0319 09:40:30.292589 27819 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="905babca-789e-4f36-8170-5f0582bc8e3f" Mar 19 09:40:30.292638 master-0 kubenswrapper[27819]: I0319 09:40:30.292634 27819 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="905babca-789e-4f36-8170-5f0582bc8e3f" Mar 19 09:40:30.311278 master-0 kubenswrapper[27819]: I0319 09:40:30.311193 27819 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:30.320992 master-0 kubenswrapper[27819]: I0319 09:40:30.320927 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:40:30.327690 master-0 kubenswrapper[27819]: I0319 09:40:30.327645 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:30.329581 master-0 kubenswrapper[27819]: I0319 09:40:30.329496 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:40:30.337248 master-0 kubenswrapper[27819]: I0319 09:40:30.337173 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 19 09:40:30.852374 master-0 kubenswrapper[27819]: I0319 09:40:30.852316 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"01ee49b6be31c6b1924d1c0337571f4b","Type":"ContainerStarted","Data":"114e3a69f3aa5830fdc341b819dcee5a020838e68bd744ea5ca10f41d33a1b92"} Mar 19 09:40:30.852374 master-0 kubenswrapper[27819]: I0319 09:40:30.852379 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"01ee49b6be31c6b1924d1c0337571f4b","Type":"ContainerStarted","Data":"3ca427e8aeb0b5df0a86d91df2a4e0f31d806bbb554ddb9aa2779c396203a09c"} Mar 19 09:40:30.852715 master-0 kubenswrapper[27819]: I0319 09:40:30.852391 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"01ee49b6be31c6b1924d1c0337571f4b","Type":"ContainerStarted","Data":"5d622d5d7023a34b7bba634ddcc7129e2aa538e4a02dbd57a9639b846245ff81"} Mar 19 09:40:31.787831 master-0 kubenswrapper[27819]: I0319 09:40:31.787798 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:40:31.875894 master-0 kubenswrapper[27819]: I0319 09:40:31.875854 27819 generic.go:334] "Generic (PLEG): container finished" podID="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" containerID="b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4" exitCode=0 Mar 19 09:40:31.876179 master-0 kubenswrapper[27819]: I0319 09:40:31.876144 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" event={"ID":"5ae3c935-4beb-4cc9-ba91-d82cac3148dd","Type":"ContainerDied","Data":"b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4"} Mar 19 09:40:31.876266 master-0 kubenswrapper[27819]: I0319 09:40:31.876254 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" event={"ID":"5ae3c935-4beb-4cc9-ba91-d82cac3148dd","Type":"ContainerDied","Data":"2b1a761121f2940d5d19318eb5f9415c23b48668f6c88a3e7a1af25b10ed5fd4"} Mar 19 09:40:31.876336 master-0 kubenswrapper[27819]: I0319 09:40:31.876325 27819 scope.go:117] "RemoveContainer" containerID="b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4" Mar 19 09:40:31.876492 master-0 kubenswrapper[27819]: I0319 09:40:31.876480 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7c64897fc5-qj6vj" Mar 19 09:40:31.889791 master-0 kubenswrapper[27819]: I0319 09:40:31.889740 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log\") pod \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " Mar 19 09:40:31.889951 master-0 kubenswrapper[27819]: I0319 09:40:31.889815 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs\") pod \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " Mar 19 09:40:31.889951 master-0 kubenswrapper[27819]: I0319 09:40:31.889884 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") pod \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " Mar 19 09:40:31.890024 master-0 kubenswrapper[27819]: I0319 09:40:31.889970 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles\") pod \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " Mar 19 09:40:31.890024 master-0 kubenswrapper[27819]: I0319 09:40:31.890008 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") pod \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\" (UID: 
\"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " Mar 19 09:40:31.890084 master-0 kubenswrapper[27819]: I0319 09:40:31.890052 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") pod \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " Mar 19 09:40:31.890126 master-0 kubenswrapper[27819]: I0319 09:40:31.890088 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4jnj\" (UniqueName: \"kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj\") pod \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\" (UID: \"5ae3c935-4beb-4cc9-ba91-d82cac3148dd\") " Mar 19 09:40:31.891992 master-0 kubenswrapper[27819]: I0319 09:40:31.891825 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"01ee49b6be31c6b1924d1c0337571f4b","Type":"ContainerStarted","Data":"865790f4b4facf52c58e6caec1fe2bb2e12dda2c6de3c2a1839808c8a1858bf6"} Mar 19 09:40:31.891992 master-0 kubenswrapper[27819]: I0319 09:40:31.891882 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"01ee49b6be31c6b1924d1c0337571f4b","Type":"ContainerStarted","Data":"4c52b8a45ab04d058278ccdc107ab30d4a543e85359fa5ddacb709e1f718b15e"} Mar 19 09:40:31.892177 master-0 kubenswrapper[27819]: I0319 09:40:31.892085 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log" (OuterVolumeSpecName: "audit-log") pod "5ae3c935-4beb-4cc9-ba91-d82cac3148dd" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd"). InnerVolumeSpecName "audit-log". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:40:31.892177 master-0 kubenswrapper[27819]: I0319 09:40:31.892103 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "5ae3c935-4beb-4cc9-ba91-d82cac3148dd" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:40:31.892611 master-0 kubenswrapper[27819]: I0319 09:40:31.892452 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "5ae3c935-4beb-4cc9-ba91-d82cac3148dd" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:40:31.896439 master-0 kubenswrapper[27819]: I0319 09:40:31.893215 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj" (OuterVolumeSpecName: "kube-api-access-p4jnj") pod "5ae3c935-4beb-4cc9-ba91-d82cac3148dd" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd"). InnerVolumeSpecName "kube-api-access-p4jnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:40:31.896679 master-0 kubenswrapper[27819]: I0319 09:40:31.896640 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "5ae3c935-4beb-4cc9-ba91-d82cac3148dd" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd"). InnerVolumeSpecName "secret-metrics-client-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:40:31.896822 master-0 kubenswrapper[27819]: I0319 09:40:31.896805 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "5ae3c935-4beb-4cc9-ba91-d82cac3148dd" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:40:31.898801 master-0 kubenswrapper[27819]: I0319 09:40:31.898754 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "5ae3c935-4beb-4cc9-ba91-d82cac3148dd" (UID: "5ae3c935-4beb-4cc9-ba91-d82cac3148dd"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:40:31.940883 master-0 kubenswrapper[27819]: I0319 09:40:31.940834 27819 scope.go:117] "RemoveContainer" containerID="b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4" Mar 19 09:40:31.941568 master-0 kubenswrapper[27819]: E0319 09:40:31.941487 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4\": container with ID starting with b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4 not found: ID does not exist" containerID="b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4" Mar 19 09:40:31.941656 master-0 kubenswrapper[27819]: I0319 09:40:31.941592 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4"} err="failed to get container status 
\"b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4\": rpc error: code = NotFound desc = could not find container \"b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4\": container with ID starting with b5b3df0dbac794bd5e25602842ef5758b157c791c3a418ce44f839aed0b77ef4 not found: ID does not exist" Mar 19 09:40:31.996612 master-0 kubenswrapper[27819]: I0319 09:40:31.996438 27819 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:31.996612 master-0 kubenswrapper[27819]: I0319 09:40:31.996487 27819 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:31.996612 master-0 kubenswrapper[27819]: I0319 09:40:31.996497 27819 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:31.996612 master-0 kubenswrapper[27819]: I0319 09:40:31.996506 27819 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:31.996612 master-0 kubenswrapper[27819]: I0319 09:40:31.996520 27819 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:31.996612 master-0 kubenswrapper[27819]: I0319 09:40:31.996531 27819 reconciler_common.go:293] "Volume detached for volume 
\"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:31.996612 master-0 kubenswrapper[27819]: I0319 09:40:31.996539 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4jnj\" (UniqueName: \"kubernetes.io/projected/5ae3c935-4beb-4cc9-ba91-d82cac3148dd-kube-api-access-p4jnj\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:32.205968 master-0 kubenswrapper[27819]: I0319 09:40:32.205895 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.205874472 podStartE2EDuration="2.205874472s" podCreationTimestamp="2026-03-19 09:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:40:31.914901121 +0000 UTC m=+416.836478823" watchObservedRunningTime="2026-03-19 09:40:32.205874472 +0000 UTC m=+417.127452164" Mar 19 09:40:32.209826 master-0 kubenswrapper[27819]: I0319 09:40:32.209793 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7c64897fc5-qj6vj"] Mar 19 09:40:32.215747 master-0 kubenswrapper[27819]: I0319 09:40:32.215517 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-7c64897fc5-qj6vj"] Mar 19 09:40:33.290535 master-0 kubenswrapper[27819]: I0319 09:40:33.290402 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" path="/var/lib/kubelet/pods/5ae3c935-4beb-4cc9-ba91-d82cac3148dd/volumes" Mar 19 09:40:35.586258 master-0 kubenswrapper[27819]: I0319 09:40:35.586191 27819 scope.go:117] "RemoveContainer" containerID="74b087b8a1f11417cfbc6b3012b38420ffa8a4dbed87e2e5a22cd51bf2974639" Mar 19 09:40:35.608039 master-0 kubenswrapper[27819]: I0319 09:40:35.608003 
27819 scope.go:117] "RemoveContainer" containerID="59bfbc24aed025cbeb33d0e5a40c5d7418d9f9aec04c5fd5b96dbe02fab0ba33" Mar 19 09:40:35.626683 master-0 kubenswrapper[27819]: I0319 09:40:35.626623 27819 scope.go:117] "RemoveContainer" containerID="81557cf106fbd5f4a3b2964beaaeaf69341eb9b15abccbc6d3aef5351309e1d6" Mar 19 09:40:40.329959 master-0 kubenswrapper[27819]: I0319 09:40:40.328833 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:40.331882 master-0 kubenswrapper[27819]: I0319 09:40:40.330001 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:40.332018 master-0 kubenswrapper[27819]: I0319 09:40:40.331986 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:40.332088 master-0 kubenswrapper[27819]: I0319 09:40:40.332069 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:40.335437 master-0 kubenswrapper[27819]: I0319 09:40:40.335375 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:40.336092 master-0 kubenswrapper[27819]: I0319 09:40:40.336046 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:40.973207 master-0 kubenswrapper[27819]: I0319 09:40:40.973156 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:40.974675 master-0 kubenswrapper[27819]: I0319 09:40:40.974650 27819 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: I0319 09:40:44.722183 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-6-master-0"] Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: E0319 09:40:44.722672 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" containerName="metrics-server" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: I0319 09:40:44.722689 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" containerName="metrics-server" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: E0319 09:40:44.722701 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25a9e5e1-e5d5-457d-8c54-8b58dca34985" containerName="console" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: I0319 09:40:44.722708 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="25a9e5e1-e5d5-457d-8c54-8b58dca34985" containerName="console" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: E0319 09:40:44.722779 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee88483-d819-42ea-ad81-b4e3f12f226a" containerName="installer" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: I0319 09:40:44.722792 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee88483-d819-42ea-ad81-b4e3f12f226a" containerName="installer" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: I0319 09:40:44.722975 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="25a9e5e1-e5d5-457d-8c54-8b58dca34985" containerName="console" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: I0319 09:40:44.723025 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae3c935-4beb-4cc9-ba91-d82cac3148dd" containerName="metrics-server" Mar 19 09:40:44.725190 
master-0 kubenswrapper[27819]: I0319 09:40:44.723052 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee88483-d819-42ea-ad81-b4e3f12f226a" containerName="installer" Mar 19 09:40:44.725190 master-0 kubenswrapper[27819]: I0319 09:40:44.723943 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:44.730401 master-0 kubenswrapper[27819]: I0319 09:40:44.729622 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 19 09:40:44.730401 master-0 kubenswrapper[27819]: I0319 09:40:44.729849 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cnb44" Mar 19 09:40:44.737337 master-0 kubenswrapper[27819]: I0319 09:40:44.737267 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-6-master-0"] Mar 19 09:40:44.811369 master-0 kubenswrapper[27819]: I0319 09:40:44.811296 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d8e2904-2e35-4973-814f-d876aadbc036-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:44.811369 master-0 kubenswrapper[27819]: I0319 09:40:44.811361 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d8e2904-2e35-4973-814f-d876aadbc036-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:44.912197 master-0 kubenswrapper[27819]: I0319 09:40:44.912145 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d8e2904-2e35-4973-814f-d876aadbc036-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:44.913638 master-0 kubenswrapper[27819]: I0319 09:40:44.912586 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d8e2904-2e35-4973-814f-d876aadbc036-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:44.913638 master-0 kubenswrapper[27819]: I0319 09:40:44.912585 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d8e2904-2e35-4973-814f-d876aadbc036-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:44.940259 master-0 kubenswrapper[27819]: I0319 09:40:44.940203 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d8e2904-2e35-4973-814f-d876aadbc036-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:45.047402 master-0 kubenswrapper[27819]: I0319 09:40:45.047293 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:45.428695 master-0 kubenswrapper[27819]: I0319 09:40:45.428637 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-6-master-0"] Mar 19 09:40:45.435804 master-0 kubenswrapper[27819]: W0319 09:40:45.435736 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6d8e2904_2e35_4973_814f_d876aadbc036.slice/crio-efa5b62d2b1e1b7704c92f8d8b7504bbf0bef639a482f546cdcb62f7180052c9 WatchSource:0}: Error finding container efa5b62d2b1e1b7704c92f8d8b7504bbf0bef639a482f546cdcb62f7180052c9: Status 404 returned error can't find the container with id efa5b62d2b1e1b7704c92f8d8b7504bbf0bef639a482f546cdcb62f7180052c9 Mar 19 09:40:46.004417 master-0 kubenswrapper[27819]: I0319 09:40:46.004339 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"6d8e2904-2e35-4973-814f-d876aadbc036","Type":"ContainerStarted","Data":"844d8c894bf32046db8b3d16cc869538cc9163d8a6a351c2b0411714718ce816"} Mar 19 09:40:46.005187 master-0 kubenswrapper[27819]: I0319 09:40:46.005164 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"6d8e2904-2e35-4973-814f-d876aadbc036","Type":"ContainerStarted","Data":"efa5b62d2b1e1b7704c92f8d8b7504bbf0bef639a482f546cdcb62f7180052c9"} Mar 19 09:40:46.025335 master-0 kubenswrapper[27819]: I0319 09:40:46.025241 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" podStartSLOduration=2.025221445 podStartE2EDuration="2.025221445s" podCreationTimestamp="2026-03-19 09:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 
09:40:46.021545615 +0000 UTC m=+430.943123317" watchObservedRunningTime="2026-03-19 09:40:46.025221445 +0000 UTC m=+430.946799137" Mar 19 09:40:47.014223 master-0 kubenswrapper[27819]: I0319 09:40:47.014160 27819 generic.go:334] "Generic (PLEG): container finished" podID="6d8e2904-2e35-4973-814f-d876aadbc036" containerID="844d8c894bf32046db8b3d16cc869538cc9163d8a6a351c2b0411714718ce816" exitCode=0 Mar 19 09:40:47.014223 master-0 kubenswrapper[27819]: I0319 09:40:47.014203 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"6d8e2904-2e35-4973-814f-d876aadbc036","Type":"ContainerDied","Data":"844d8c894bf32046db8b3d16cc869538cc9163d8a6a351c2b0411714718ce816"} Mar 19 09:40:48.282162 master-0 kubenswrapper[27819]: I0319 09:40:48.281993 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:48.364333 master-0 kubenswrapper[27819]: I0319 09:40:48.364235 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d8e2904-2e35-4973-814f-d876aadbc036-kube-api-access\") pod \"6d8e2904-2e35-4973-814f-d876aadbc036\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " Mar 19 09:40:48.364586 master-0 kubenswrapper[27819]: I0319 09:40:48.364514 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d8e2904-2e35-4973-814f-d876aadbc036-kubelet-dir\") pod \"6d8e2904-2e35-4973-814f-d876aadbc036\" (UID: \"6d8e2904-2e35-4973-814f-d876aadbc036\") " Mar 19 09:40:48.364642 master-0 kubenswrapper[27819]: I0319 09:40:48.364609 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d8e2904-2e35-4973-814f-d876aadbc036-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"6d8e2904-2e35-4973-814f-d876aadbc036" (UID: "6d8e2904-2e35-4973-814f-d876aadbc036"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:40:48.365981 master-0 kubenswrapper[27819]: I0319 09:40:48.365954 27819 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d8e2904-2e35-4973-814f-d876aadbc036-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:48.367661 master-0 kubenswrapper[27819]: I0319 09:40:48.367515 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8e2904-2e35-4973-814f-d876aadbc036-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6d8e2904-2e35-4973-814f-d876aadbc036" (UID: "6d8e2904-2e35-4973-814f-d876aadbc036"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:40:48.467869 master-0 kubenswrapper[27819]: I0319 09:40:48.467800 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6d8e2904-2e35-4973-814f-d876aadbc036-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 19 09:40:49.043336 master-0 kubenswrapper[27819]: I0319 09:40:49.040360 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"6d8e2904-2e35-4973-814f-d876aadbc036","Type":"ContainerDied","Data":"efa5b62d2b1e1b7704c92f8d8b7504bbf0bef639a482f546cdcb62f7180052c9"} Mar 19 09:40:49.043336 master-0 kubenswrapper[27819]: I0319 09:40:49.040404 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa5b62d2b1e1b7704c92f8d8b7504bbf0bef639a482f546cdcb62f7180052c9" Mar 19 09:40:49.043336 master-0 kubenswrapper[27819]: I0319 09:40:49.040419 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 19 09:40:49.363428 master-0 kubenswrapper[27819]: I0319 09:40:49.363383 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-l6zr5"] Mar 19 09:40:49.364262 master-0 kubenswrapper[27819]: E0319 09:40:49.364245 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8e2904-2e35-4973-814f-d876aadbc036" containerName="pruner" Mar 19 09:40:49.364340 master-0 kubenswrapper[27819]: I0319 09:40:49.364330 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8e2904-2e35-4973-814f-d876aadbc036" containerName="pruner" Mar 19 09:40:49.364563 master-0 kubenswrapper[27819]: I0319 09:40:49.364529 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8e2904-2e35-4973-814f-d876aadbc036" containerName="pruner" Mar 19 09:40:49.365131 master-0 kubenswrapper[27819]: I0319 09:40:49.365114 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.367867 master-0 kubenswrapper[27819]: I0319 09:40:49.367453 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 19 09:40:49.367867 master-0 kubenswrapper[27819]: I0319 09:40:49.367757 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 19 09:40:49.368039 master-0 kubenswrapper[27819]: I0319 09:40:49.367909 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 19 09:40:49.368072 master-0 kubenswrapper[27819]: I0319 09:40:49.368062 27819 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 19 09:40:49.372648 master-0 kubenswrapper[27819]: I0319 09:40:49.372600 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-l6zr5"] Mar 19 09:40:49.481062 master-0 kubenswrapper[27819]: I0319 09:40:49.481005 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c07acf21-e79c-485e-a041-44c2aa7dbecc-os-client-config\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.481253 master-0 kubenswrapper[27819]: I0319 09:40:49.481087 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c07acf21-e79c-485e-a041-44c2aa7dbecc-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.481253 master-0 kubenswrapper[27819]: I0319 09:40:49.481167 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75ncz\" (UniqueName: \"kubernetes.io/projected/c07acf21-e79c-485e-a041-44c2aa7dbecc-kube-api-access-75ncz\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.582854 master-0 kubenswrapper[27819]: I0319 09:40:49.582781 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c07acf21-e79c-485e-a041-44c2aa7dbecc-os-client-config\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.583057 master-0 kubenswrapper[27819]: I0319 09:40:49.582869 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c07acf21-e79c-485e-a041-44c2aa7dbecc-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.583057 master-0 kubenswrapper[27819]: I0319 09:40:49.582931 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75ncz\" (UniqueName: \"kubernetes.io/projected/c07acf21-e79c-485e-a041-44c2aa7dbecc-kube-api-access-75ncz\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.584319 master-0 kubenswrapper[27819]: I0319 09:40:49.584273 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c07acf21-e79c-485e-a041-44c2aa7dbecc-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " 
pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.586404 master-0 kubenswrapper[27819]: I0319 09:40:49.586353 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c07acf21-e79c-485e-a041-44c2aa7dbecc-os-client-config\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.599735 master-0 kubenswrapper[27819]: I0319 09:40:49.599560 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75ncz\" (UniqueName: \"kubernetes.io/projected/c07acf21-e79c-485e-a041-44c2aa7dbecc-kube-api-access-75ncz\") pod \"sushy-emulator-59477995f9-l6zr5\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:49.687709 master-0 kubenswrapper[27819]: I0319 09:40:49.687524 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:50.095983 master-0 kubenswrapper[27819]: I0319 09:40:50.095929 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-l6zr5"] Mar 19 09:40:50.099096 master-0 kubenswrapper[27819]: W0319 09:40:50.099037 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc07acf21_e79c_485e_a041_44c2aa7dbecc.slice/crio-0724e6b2b2292c153507f7fc633c278193696d798c464c0e2035e41a2c8e28f4 WatchSource:0}: Error finding container 0724e6b2b2292c153507f7fc633c278193696d798c464c0e2035e41a2c8e28f4: Status 404 returned error can't find the container with id 0724e6b2b2292c153507f7fc633c278193696d798c464c0e2035e41a2c8e28f4 Mar 19 09:40:51.052281 master-0 kubenswrapper[27819]: I0319 09:40:51.052227 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" event={"ID":"c07acf21-e79c-485e-a041-44c2aa7dbecc","Type":"ContainerStarted","Data":"0724e6b2b2292c153507f7fc633c278193696d798c464c0e2035e41a2c8e28f4"} Mar 19 09:40:55.831876 master-0 kubenswrapper[27819]: I0319 09:40:55.831799 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 19 09:40:55.878054 master-0 kubenswrapper[27819]: I0319 09:40:55.877990 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 19 09:40:57.287264 master-0 kubenswrapper[27819]: I0319 09:40:57.287199 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ab0802-da8a-475c-a707-09f7838f580b" path="/var/lib/kubelet/pods/e3ab0802-da8a-475c-a707-09f7838f580b/volumes" Mar 19 09:40:58.107644 master-0 kubenswrapper[27819]: I0319 09:40:58.107590 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" 
event={"ID":"c07acf21-e79c-485e-a041-44c2aa7dbecc","Type":"ContainerStarted","Data":"de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b"} Mar 19 09:40:58.130493 master-0 kubenswrapper[27819]: I0319 09:40:58.130413 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" podStartSLOduration=2.104098294 podStartE2EDuration="9.130396003s" podCreationTimestamp="2026-03-19 09:40:49 +0000 UTC" firstStartedPulling="2026-03-19 09:40:50.101022945 +0000 UTC m=+435.022600637" lastFinishedPulling="2026-03-19 09:40:57.127320664 +0000 UTC m=+442.048898346" observedRunningTime="2026-03-19 09:40:58.128807239 +0000 UTC m=+443.050384961" watchObservedRunningTime="2026-03-19 09:40:58.130396003 +0000 UTC m=+443.051973695" Mar 19 09:40:59.688208 master-0 kubenswrapper[27819]: I0319 09:40:59.688153 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:59.688735 master-0 kubenswrapper[27819]: I0319 09:40:59.688219 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:40:59.697885 master-0 kubenswrapper[27819]: I0319 09:40:59.697845 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:41:00.125272 master-0 kubenswrapper[27819]: I0319 09:41:00.125200 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:41:19.266886 master-0 kubenswrapper[27819]: I0319 09:41:19.266799 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-cc895596c-wrsxb"] Mar 19 09:41:19.268235 master-0 kubenswrapper[27819]: I0319 09:41:19.268204 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:19.311086 master-0 kubenswrapper[27819]: I0319 09:41:19.311028 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-cc895596c-wrsxb"] Mar 19 09:41:19.448899 master-0 kubenswrapper[27819]: I0319 09:41:19.448833 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwrdd\" (UniqueName: \"kubernetes.io/projected/b218517f-669c-46e4-9425-4adba2d93534-kube-api-access-mwrdd\") pod \"nova-console-poller-cc895596c-wrsxb\" (UID: \"b218517f-669c-46e4-9425-4adba2d93534\") " pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:19.449589 master-0 kubenswrapper[27819]: I0319 09:41:19.449509 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b218517f-669c-46e4-9425-4adba2d93534-os-client-config\") pod \"nova-console-poller-cc895596c-wrsxb\" (UID: \"b218517f-669c-46e4-9425-4adba2d93534\") " pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:19.551866 master-0 kubenswrapper[27819]: I0319 09:41:19.551730 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b218517f-669c-46e4-9425-4adba2d93534-os-client-config\") pod \"nova-console-poller-cc895596c-wrsxb\" (UID: \"b218517f-669c-46e4-9425-4adba2d93534\") " pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:19.551866 master-0 kubenswrapper[27819]: I0319 09:41:19.551846 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwrdd\" (UniqueName: \"kubernetes.io/projected/b218517f-669c-46e4-9425-4adba2d93534-kube-api-access-mwrdd\") pod \"nova-console-poller-cc895596c-wrsxb\" (UID: \"b218517f-669c-46e4-9425-4adba2d93534\") " 
pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:19.555443 master-0 kubenswrapper[27819]: I0319 09:41:19.555363 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b218517f-669c-46e4-9425-4adba2d93534-os-client-config\") pod \"nova-console-poller-cc895596c-wrsxb\" (UID: \"b218517f-669c-46e4-9425-4adba2d93534\") " pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:19.572405 master-0 kubenswrapper[27819]: I0319 09:41:19.572313 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwrdd\" (UniqueName: \"kubernetes.io/projected/b218517f-669c-46e4-9425-4adba2d93534-kube-api-access-mwrdd\") pod \"nova-console-poller-cc895596c-wrsxb\" (UID: \"b218517f-669c-46e4-9425-4adba2d93534\") " pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:19.584564 master-0 kubenswrapper[27819]: I0319 09:41:19.584482 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" Mar 19 09:41:20.058841 master-0 kubenswrapper[27819]: I0319 09:41:20.058754 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-cc895596c-wrsxb"] Mar 19 09:41:20.062600 master-0 kubenswrapper[27819]: W0319 09:41:20.062522 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb218517f_669c_46e4_9425_4adba2d93534.slice/crio-41572afcca411c40936792302d9855a763949fa725df6eda6474848d03690cb6 WatchSource:0}: Error finding container 41572afcca411c40936792302d9855a763949fa725df6eda6474848d03690cb6: Status 404 returned error can't find the container with id 41572afcca411c40936792302d9855a763949fa725df6eda6474848d03690cb6 Mar 19 09:41:20.277456 master-0 kubenswrapper[27819]: I0319 09:41:20.277367 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" event={"ID":"b218517f-669c-46e4-9425-4adba2d93534","Type":"ContainerStarted","Data":"41572afcca411c40936792302d9855a763949fa725df6eda6474848d03690cb6"} Mar 19 09:41:28.340914 master-0 kubenswrapper[27819]: I0319 09:41:28.340849 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" event={"ID":"b218517f-669c-46e4-9425-4adba2d93534","Type":"ContainerStarted","Data":"9f2da44453f5485f9e740096a55ff66f199b76380d17f287ab0273cdd1d20a20"} Mar 19 09:41:28.340914 master-0 kubenswrapper[27819]: I0319 09:41:28.340897 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" event={"ID":"b218517f-669c-46e4-9425-4adba2d93534","Type":"ContainerStarted","Data":"60ea03fb9909e947392004d2f4828b15cd1776cc342cca0b83405318f9def7d7"} Mar 19 09:41:28.358870 master-0 kubenswrapper[27819]: I0319 09:41:28.358796 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="sushy-emulator/nova-console-poller-cc895596c-wrsxb" podStartSLOduration=1.326636043 podStartE2EDuration="9.358778675s" podCreationTimestamp="2026-03-19 09:41:19 +0000 UTC" firstStartedPulling="2026-03-19 09:41:20.064610594 +0000 UTC m=+464.986188286" lastFinishedPulling="2026-03-19 09:41:28.096753226 +0000 UTC m=+473.018330918" observedRunningTime="2026-03-19 09:41:28.356155874 +0000 UTC m=+473.277733576" watchObservedRunningTime="2026-03-19 09:41:28.358778675 +0000 UTC m=+473.280356367" Mar 19 09:41:35.678018 master-0 kubenswrapper[27819]: I0319 09:41:35.677947 27819 scope.go:117] "RemoveContainer" containerID="a1c35003004ca85e3194260594ce7980c9cfead4c46c7a6e5e65ede51128fa87" Mar 19 09:41:53.066982 master-0 kubenswrapper[27819]: I0319 09:41:53.066922 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt"] Mar 19 09:41:53.068189 master-0 kubenswrapper[27819]: I0319 09:41:53.068153 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.078590 master-0 kubenswrapper[27819]: I0319 09:41:53.078501 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt"] Mar 19 09:41:53.148064 master-0 kubenswrapper[27819]: I0319 09:41:53.147990 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b35f54db-f9ba-4f61-921b-b5e20b3615a3-os-client-config\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.148064 master-0 kubenswrapper[27819]: I0319 09:41:53.148061 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/b35f54db-f9ba-4f61-921b-b5e20b3615a3-nova-console-recordings-pv\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.148426 master-0 kubenswrapper[27819]: I0319 09:41:53.148347 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kwt4\" (UniqueName: \"kubernetes.io/projected/b35f54db-f9ba-4f61-921b-b5e20b3615a3-kube-api-access-6kwt4\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.249526 master-0 kubenswrapper[27819]: I0319 09:41:53.249453 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kwt4\" (UniqueName: \"kubernetes.io/projected/b35f54db-f9ba-4f61-921b-b5e20b3615a3-kube-api-access-6kwt4\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: 
\"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.249856 master-0 kubenswrapper[27819]: I0319 09:41:53.249626 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b35f54db-f9ba-4f61-921b-b5e20b3615a3-os-client-config\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.249995 master-0 kubenswrapper[27819]: I0319 09:41:53.249943 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/b35f54db-f9ba-4f61-921b-b5e20b3615a3-nova-console-recordings-pv\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.253370 master-0 kubenswrapper[27819]: I0319 09:41:53.252806 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/b35f54db-f9ba-4f61-921b-b5e20b3615a3-os-client-config\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.267448 master-0 kubenswrapper[27819]: I0319 09:41:53.267388 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kwt4\" (UniqueName: \"kubernetes.io/projected/b35f54db-f9ba-4f61-921b-b5e20b3615a3-kube-api-access-6kwt4\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.886256 master-0 kubenswrapper[27819]: I0319 09:41:53.886191 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/b35f54db-f9ba-4f61-921b-b5e20b3615a3-nova-console-recordings-pv\") pod \"nova-console-recorder-5499bf8b75-nwhpt\" (UID: \"b35f54db-f9ba-4f61-921b-b5e20b3615a3\") " pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:53.982583 master-0 kubenswrapper[27819]: I0319 09:41:53.982501 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" Mar 19 09:41:54.404201 master-0 kubenswrapper[27819]: I0319 09:41:54.404138 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt"] Mar 19 09:41:54.407731 master-0 kubenswrapper[27819]: W0319 09:41:54.407682 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb35f54db_f9ba_4f61_921b_b5e20b3615a3.slice/crio-aad184fe8ca8878cda981f6fcb71b2881d0e09df01e199fcd8a54eab13efeedc WatchSource:0}: Error finding container aad184fe8ca8878cda981f6fcb71b2881d0e09df01e199fcd8a54eab13efeedc: Status 404 returned error can't find the container with id aad184fe8ca8878cda981f6fcb71b2881d0e09df01e199fcd8a54eab13efeedc Mar 19 09:41:54.410022 master-0 kubenswrapper[27819]: I0319 09:41:54.409993 27819 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 09:41:54.531207 master-0 kubenswrapper[27819]: I0319 09:41:54.531111 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" event={"ID":"b35f54db-f9ba-4f61-921b-b5e20b3615a3","Type":"ContainerStarted","Data":"aad184fe8ca8878cda981f6fcb71b2881d0e09df01e199fcd8a54eab13efeedc"} Mar 19 09:42:04.614634 master-0 kubenswrapper[27819]: I0319 09:42:04.614534 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" 
event={"ID":"b35f54db-f9ba-4f61-921b-b5e20b3615a3","Type":"ContainerStarted","Data":"f564a8a8fe121a3c0c3f7b6af739a97cb02a223e7d99551ee7d38fc877a5389f"} Mar 19 09:42:04.616072 master-0 kubenswrapper[27819]: I0319 09:42:04.614660 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" event={"ID":"b35f54db-f9ba-4f61-921b-b5e20b3615a3","Type":"ContainerStarted","Data":"6219f7035f841d220f6b516ce9eb86b20663a5ef59ea258a1d89f552d67f5ba3"} Mar 19 09:42:04.637958 master-0 kubenswrapper[27819]: I0319 09:42:04.637864 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-5499bf8b75-nwhpt" podStartSLOduration=1.844290609 podStartE2EDuration="11.637827596s" podCreationTimestamp="2026-03-19 09:41:53 +0000 UTC" firstStartedPulling="2026-03-19 09:41:54.409963646 +0000 UTC m=+499.331541338" lastFinishedPulling="2026-03-19 09:42:04.203500603 +0000 UTC m=+509.125078325" observedRunningTime="2026-03-19 09:42:04.635890294 +0000 UTC m=+509.557467996" watchObservedRunningTime="2026-03-19 09:42:04.637827596 +0000 UTC m=+509.559405288" Mar 19 09:42:33.084222 master-0 kubenswrapper[27819]: I0319 09:42:33.084142 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp"] Mar 19 09:42:33.086150 master-0 kubenswrapper[27819]: I0319 09:42:33.086104 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.088072 master-0 kubenswrapper[27819]: I0319 09:42:33.088032 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-nfkbb" Mar 19 09:42:33.098734 master-0 kubenswrapper[27819]: I0319 09:42:33.098675 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp"] Mar 19 09:42:33.190754 master-0 kubenswrapper[27819]: I0319 09:42:33.190706 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9nms\" (UniqueName: \"kubernetes.io/projected/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-kube-api-access-l9nms\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.190989 master-0 kubenswrapper[27819]: I0319 09:42:33.190795 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.190989 master-0 kubenswrapper[27819]: I0319 09:42:33.190818 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.291964 master-0 kubenswrapper[27819]: I0319 09:42:33.291875 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.292310 master-0 kubenswrapper[27819]: I0319 09:42:33.292125 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.292310 master-0 kubenswrapper[27819]: I0319 09:42:33.292272 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9nms\" (UniqueName: \"kubernetes.io/projected/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-kube-api-access-l9nms\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.292572 master-0 kubenswrapper[27819]: I0319 09:42:33.292482 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 
09:42:33.292665 master-0 kubenswrapper[27819]: I0319 09:42:33.292586 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.314680 master-0 kubenswrapper[27819]: I0319 09:42:33.314624 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9nms\" (UniqueName: \"kubernetes.io/projected/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-kube-api-access-l9nms\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.408770 master-0 kubenswrapper[27819]: I0319 09:42:33.408683 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:33.851141 master-0 kubenswrapper[27819]: I0319 09:42:33.851054 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp"] Mar 19 09:42:33.856406 master-0 kubenswrapper[27819]: W0319 09:42:33.856370 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8a3cb80_d2cb_4072_9b06_5af473ed22c5.slice/crio-2b9b6475d67963e2a0887da16a6582bcba8b0c5486b07f08389e21e54815f116 WatchSource:0}: Error finding container 2b9b6475d67963e2a0887da16a6582bcba8b0c5486b07f08389e21e54815f116: Status 404 returned error can't find the container with id 2b9b6475d67963e2a0887da16a6582bcba8b0c5486b07f08389e21e54815f116 Mar 19 09:42:34.844101 master-0 kubenswrapper[27819]: I0319 09:42:34.844033 27819 generic.go:334] "Generic (PLEG): container finished" podID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerID="9e892ebfb53d92d33f2ad3379acf2944fecb502945682ee17932eb0b236240de" exitCode=0 Mar 19 09:42:34.844101 master-0 kubenswrapper[27819]: I0319 09:42:34.844082 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" event={"ID":"e8a3cb80-d2cb-4072-9b06-5af473ed22c5","Type":"ContainerDied","Data":"9e892ebfb53d92d33f2ad3379acf2944fecb502945682ee17932eb0b236240de"} Mar 19 09:42:34.844101 master-0 kubenswrapper[27819]: I0319 09:42:34.844113 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" event={"ID":"e8a3cb80-d2cb-4072-9b06-5af473ed22c5","Type":"ContainerStarted","Data":"2b9b6475d67963e2a0887da16a6582bcba8b0c5486b07f08389e21e54815f116"} Mar 19 09:42:36.861762 master-0 kubenswrapper[27819]: I0319 09:42:36.861701 27819 
generic.go:334] "Generic (PLEG): container finished" podID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerID="4994bd95259ee3a7e4b8c916d14e23beb840a7ee8d68262c8c54cc4c7675ef62" exitCode=0 Mar 19 09:42:36.861762 master-0 kubenswrapper[27819]: I0319 09:42:36.861754 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" event={"ID":"e8a3cb80-d2cb-4072-9b06-5af473ed22c5","Type":"ContainerDied","Data":"4994bd95259ee3a7e4b8c916d14e23beb840a7ee8d68262c8c54cc4c7675ef62"} Mar 19 09:42:37.872651 master-0 kubenswrapper[27819]: I0319 09:42:37.872607 27819 generic.go:334] "Generic (PLEG): container finished" podID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerID="0891c67ffbe85124e6ebfe2706dcb575843b8c97d98ebe32460694a5cf1bbb3b" exitCode=0 Mar 19 09:42:37.873213 master-0 kubenswrapper[27819]: I0319 09:42:37.872664 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" event={"ID":"e8a3cb80-d2cb-4072-9b06-5af473ed22c5","Type":"ContainerDied","Data":"0891c67ffbe85124e6ebfe2706dcb575843b8c97d98ebe32460694a5cf1bbb3b"} Mar 19 09:42:39.131292 master-0 kubenswrapper[27819]: I0319 09:42:39.131229 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:39.289403 master-0 kubenswrapper[27819]: I0319 09:42:39.289261 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-util\") pod \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " Mar 19 09:42:39.289403 master-0 kubenswrapper[27819]: I0319 09:42:39.289346 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-bundle\") pod \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " Mar 19 09:42:39.289671 master-0 kubenswrapper[27819]: I0319 09:42:39.289417 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9nms\" (UniqueName: \"kubernetes.io/projected/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-kube-api-access-l9nms\") pod \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\" (UID: \"e8a3cb80-d2cb-4072-9b06-5af473ed22c5\") " Mar 19 09:42:39.290467 master-0 kubenswrapper[27819]: I0319 09:42:39.290412 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-bundle" (OuterVolumeSpecName: "bundle") pod "e8a3cb80-d2cb-4072-9b06-5af473ed22c5" (UID: "e8a3cb80-d2cb-4072-9b06-5af473ed22c5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:42:39.293847 master-0 kubenswrapper[27819]: I0319 09:42:39.293804 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-kube-api-access-l9nms" (OuterVolumeSpecName: "kube-api-access-l9nms") pod "e8a3cb80-d2cb-4072-9b06-5af473ed22c5" (UID: "e8a3cb80-d2cb-4072-9b06-5af473ed22c5"). InnerVolumeSpecName "kube-api-access-l9nms". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:42:39.325955 master-0 kubenswrapper[27819]: I0319 09:42:39.325800 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-util" (OuterVolumeSpecName: "util") pod "e8a3cb80-d2cb-4072-9b06-5af473ed22c5" (UID: "e8a3cb80-d2cb-4072-9b06-5af473ed22c5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:42:39.392715 master-0 kubenswrapper[27819]: I0319 09:42:39.392633 27819 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-util\") on node \"master-0\" DevicePath \"\"" Mar 19 09:42:39.392715 master-0 kubenswrapper[27819]: I0319 09:42:39.392681 27819 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:42:39.392715 master-0 kubenswrapper[27819]: I0319 09:42:39.392698 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9nms\" (UniqueName: \"kubernetes.io/projected/e8a3cb80-d2cb-4072-9b06-5af473ed22c5-kube-api-access-l9nms\") on node \"master-0\" DevicePath \"\"" Mar 19 09:42:39.892405 master-0 kubenswrapper[27819]: I0319 09:42:39.892350 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" event={"ID":"e8a3cb80-d2cb-4072-9b06-5af473ed22c5","Type":"ContainerDied","Data":"2b9b6475d67963e2a0887da16a6582bcba8b0c5486b07f08389e21e54815f116"} Mar 19 09:42:39.892405 master-0 kubenswrapper[27819]: I0319 09:42:39.892405 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9b6475d67963e2a0887da16a6582bcba8b0c5486b07f08389e21e54815f116" Mar 19 09:42:39.892405 master-0 kubenswrapper[27819]: I0319 09:42:39.892407 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4xcsqp" Mar 19 09:42:46.318221 master-0 kubenswrapper[27819]: I0319 09:42:46.318144 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-9d9cfd8cb-px4jh"] Mar 19 09:42:46.319121 master-0 kubenswrapper[27819]: E0319 09:42:46.318500 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerName="util" Mar 19 09:42:46.319121 master-0 kubenswrapper[27819]: I0319 09:42:46.318522 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerName="util" Mar 19 09:42:46.319121 master-0 kubenswrapper[27819]: E0319 09:42:46.318568 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerName="pull" Mar 19 09:42:46.319121 master-0 kubenswrapper[27819]: I0319 09:42:46.318579 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerName="pull" Mar 19 09:42:46.319121 master-0 kubenswrapper[27819]: E0319 09:42:46.318597 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerName="extract" Mar 19 09:42:46.319121 master-0 kubenswrapper[27819]: I0319 
09:42:46.318607 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerName="extract" Mar 19 09:42:46.319121 master-0 kubenswrapper[27819]: I0319 09:42:46.318804 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a3cb80-d2cb-4072-9b06-5af473ed22c5" containerName="extract" Mar 19 09:42:46.319578 master-0 kubenswrapper[27819]: I0319 09:42:46.319506 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.321295 master-0 kubenswrapper[27819]: I0319 09:42:46.321261 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 19 09:42:46.321377 master-0 kubenswrapper[27819]: I0319 09:42:46.321290 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 19 09:42:46.321629 master-0 kubenswrapper[27819]: I0319 09:42:46.321602 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 19 09:42:46.321992 master-0 kubenswrapper[27819]: I0319 09:42:46.321964 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 19 09:42:46.323533 master-0 kubenswrapper[27819]: I0319 09:42:46.323447 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 19 09:42:46.336859 master-0 kubenswrapper[27819]: I0319 09:42:46.336800 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-9d9cfd8cb-px4jh"] Mar 19 09:42:46.505580 master-0 kubenswrapper[27819]: I0319 09:42:46.505509 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-webhook-cert\") pod 
\"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.505781 master-0 kubenswrapper[27819]: I0319 09:42:46.505611 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-metrics-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.505781 master-0 kubenswrapper[27819]: I0319 09:42:46.505643 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-apiservice-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.505781 master-0 kubenswrapper[27819]: I0319 09:42:46.505714 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl46z\" (UniqueName: \"kubernetes.io/projected/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-kube-api-access-hl46z\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.505781 master-0 kubenswrapper[27819]: I0319 09:42:46.505762 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-socket-dir\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.607714 master-0 kubenswrapper[27819]: I0319 09:42:46.607641 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hl46z\" (UniqueName: \"kubernetes.io/projected/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-kube-api-access-hl46z\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.607714 master-0 kubenswrapper[27819]: I0319 09:42:46.607700 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-socket-dir\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.608022 master-0 kubenswrapper[27819]: I0319 09:42:46.607771 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-webhook-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.608022 master-0 kubenswrapper[27819]: I0319 09:42:46.607794 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-metrics-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.608022 master-0 kubenswrapper[27819]: I0319 09:42:46.607819 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-apiservice-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.608592 master-0 
kubenswrapper[27819]: I0319 09:42:46.608494 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-socket-dir\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.611130 master-0 kubenswrapper[27819]: I0319 09:42:46.611085 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-webhook-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.611130 master-0 kubenswrapper[27819]: I0319 09:42:46.611113 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-apiservice-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.611873 master-0 kubenswrapper[27819]: I0319 09:42:46.611823 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-metrics-cert\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:46.623065 master-0 kubenswrapper[27819]: I0319 09:42:46.623032 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl46z\" (UniqueName: \"kubernetes.io/projected/ea008692-4d68-4e15-a33b-20ffd8f7aa1a-kube-api-access-hl46z\") pod \"lvms-operator-9d9cfd8cb-px4jh\" (UID: \"ea008692-4d68-4e15-a33b-20ffd8f7aa1a\") " pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 
09:42:46.633824 master-0 kubenswrapper[27819]: I0319 09:42:46.633797 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:47.033527 master-0 kubenswrapper[27819]: I0319 09:42:47.033476 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-9d9cfd8cb-px4jh"] Mar 19 09:42:47.047479 master-0 kubenswrapper[27819]: W0319 09:42:47.047008 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea008692_4d68_4e15_a33b_20ffd8f7aa1a.slice/crio-90cb21c2331510faf83be041cd381bcfc82785a6c2aa0eedea9c5013b2130cef WatchSource:0}: Error finding container 90cb21c2331510faf83be041cd381bcfc82785a6c2aa0eedea9c5013b2130cef: Status 404 returned error can't find the container with id 90cb21c2331510faf83be041cd381bcfc82785a6c2aa0eedea9c5013b2130cef Mar 19 09:42:47.945416 master-0 kubenswrapper[27819]: I0319 09:42:47.945315 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" event={"ID":"ea008692-4d68-4e15-a33b-20ffd8f7aa1a","Type":"ContainerStarted","Data":"90cb21c2331510faf83be041cd381bcfc82785a6c2aa0eedea9c5013b2130cef"} Mar 19 09:42:52.980778 master-0 kubenswrapper[27819]: I0319 09:42:52.980678 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" event={"ID":"ea008692-4d68-4e15-a33b-20ffd8f7aa1a","Type":"ContainerStarted","Data":"3419a053913d764a93fbf4ad55424b4c6106f35422030e381016c7b4f90d435d"} Mar 19 09:42:52.980778 master-0 kubenswrapper[27819]: I0319 09:42:52.980791 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:53.018872 master-0 kubenswrapper[27819]: I0319 09:42:53.018787 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" podStartSLOduration=1.611283619 podStartE2EDuration="7.018766175s" podCreationTimestamp="2026-03-19 09:42:46 +0000 UTC" firstStartedPulling="2026-03-19 09:42:47.048936427 +0000 UTC m=+551.970514119" lastFinishedPulling="2026-03-19 09:42:52.456418993 +0000 UTC m=+557.377996675" observedRunningTime="2026-03-19 09:42:53.015132817 +0000 UTC m=+557.936710509" watchObservedRunningTime="2026-03-19 09:42:53.018766175 +0000 UTC m=+557.940343867" Mar 19 09:42:53.995025 master-0 kubenswrapper[27819]: I0319 09:42:53.994959 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-9d9cfd8cb-px4jh" Mar 19 09:42:59.252661 master-0 kubenswrapper[27819]: I0319 09:42:59.252608 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb"] Mar 19 09:42:59.255362 master-0 kubenswrapper[27819]: I0319 09:42:59.255336 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.269352 master-0 kubenswrapper[27819]: I0319 09:42:59.269307 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-nfkbb" Mar 19 09:42:59.290354 master-0 kubenswrapper[27819]: I0319 09:42:59.290312 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb"] Mar 19 09:42:59.297421 master-0 kubenswrapper[27819]: I0319 09:42:59.297370 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjbnc\" (UniqueName: \"kubernetes.io/projected/1f7ac405-63a2-48cb-ad69-cd1d648ba468-kube-api-access-hjbnc\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.297765 master-0 kubenswrapper[27819]: I0319 09:42:59.297743 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.298041 master-0 kubenswrapper[27819]: I0319 09:42:59.298023 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.399764 master-0 kubenswrapper[27819]: I0319 09:42:59.399695 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.399975 master-0 kubenswrapper[27819]: I0319 09:42:59.399782 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjbnc\" (UniqueName: \"kubernetes.io/projected/1f7ac405-63a2-48cb-ad69-cd1d648ba468-kube-api-access-hjbnc\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.399975 master-0 kubenswrapper[27819]: I0319 09:42:59.399816 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.400333 master-0 kubenswrapper[27819]: I0319 09:42:59.400294 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 
09:42:59.400399 master-0 kubenswrapper[27819]: I0319 09:42:59.400374 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.414860 master-0 kubenswrapper[27819]: I0319 09:42:59.414817 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjbnc\" (UniqueName: \"kubernetes.io/projected/1f7ac405-63a2-48cb-ad69-cd1d648ba468-kube-api-access-hjbnc\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.572309 master-0 kubenswrapper[27819]: I0319 09:42:59.572194 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:42:59.998391 master-0 kubenswrapper[27819]: I0319 09:42:59.998350 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb"] Mar 19 09:43:00.169712 master-0 kubenswrapper[27819]: I0319 09:43:00.169639 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" event={"ID":"1f7ac405-63a2-48cb-ad69-cd1d648ba468","Type":"ContainerStarted","Data":"dce6947910ecbd49a87f3f2a0e0167616ce6897073366559f16687dc0d9a3605"} Mar 19 09:43:00.169712 master-0 kubenswrapper[27819]: I0319 09:43:00.169695 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" event={"ID":"1f7ac405-63a2-48cb-ad69-cd1d648ba468","Type":"ContainerStarted","Data":"e84c5a72e8a9c9c93cd484128b7d3c252ca184867974d8c8243969f9f6b23c21"} Mar 19 09:43:00.248969 master-0 kubenswrapper[27819]: I0319 09:43:00.248855 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z"] Mar 19 09:43:00.250210 master-0 kubenswrapper[27819]: I0319 09:43:00.250177 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.260282 master-0 kubenswrapper[27819]: I0319 09:43:00.260232 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z"] Mar 19 09:43:00.315740 master-0 kubenswrapper[27819]: I0319 09:43:00.315660 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkxt\" (UniqueName: \"kubernetes.io/projected/ced9f429-b996-49d8-a7ea-118f63006d9c-kube-api-access-nwkxt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.315959 master-0 kubenswrapper[27819]: I0319 09:43:00.315778 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.315959 master-0 kubenswrapper[27819]: I0319 09:43:00.315836 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.417353 master-0 kubenswrapper[27819]: I0319 09:43:00.417291 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.417616 master-0 kubenswrapper[27819]: I0319 09:43:00.417382 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.417616 master-0 kubenswrapper[27819]: I0319 09:43:00.417462 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwkxt\" (UniqueName: \"kubernetes.io/projected/ced9f429-b996-49d8-a7ea-118f63006d9c-kube-api-access-nwkxt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.418061 master-0 kubenswrapper[27819]: I0319 09:43:00.418011 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.418496 master-0 kubenswrapper[27819]: I0319 09:43:00.418462 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-util\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.439516 master-0 kubenswrapper[27819]: I0319 09:43:00.439480 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwkxt\" (UniqueName: \"kubernetes.io/projected/ced9f429-b996-49d8-a7ea-118f63006d9c-kube-api-access-nwkxt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.591953 master-0 kubenswrapper[27819]: I0319 09:43:00.591796 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:00.902427 master-0 kubenswrapper[27819]: I0319 09:43:00.902384 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z"] Mar 19 09:43:00.903791 master-0 kubenswrapper[27819]: W0319 09:43:00.903736 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podced9f429_b996_49d8_a7ea_118f63006d9c.slice/crio-534503127ccfba0ec1c7c132b02f194cef0df2a2b33d2655c4104e1ddbf3fb76 WatchSource:0}: Error finding container 534503127ccfba0ec1c7c132b02f194cef0df2a2b33d2655c4104e1ddbf3fb76: Status 404 returned error can't find the container with id 534503127ccfba0ec1c7c132b02f194cef0df2a2b33d2655c4104e1ddbf3fb76 Mar 19 09:43:01.183430 master-0 kubenswrapper[27819]: I0319 09:43:01.183263 27819 generic.go:334] "Generic (PLEG): container finished" podID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerID="dce6947910ecbd49a87f3f2a0e0167616ce6897073366559f16687dc0d9a3605" exitCode=0 Mar 
19 09:43:01.183430 master-0 kubenswrapper[27819]: I0319 09:43:01.183332 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" event={"ID":"1f7ac405-63a2-48cb-ad69-cd1d648ba468","Type":"ContainerDied","Data":"dce6947910ecbd49a87f3f2a0e0167616ce6897073366559f16687dc0d9a3605"} Mar 19 09:43:01.185949 master-0 kubenswrapper[27819]: I0319 09:43:01.185915 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" event={"ID":"ced9f429-b996-49d8-a7ea-118f63006d9c","Type":"ContainerStarted","Data":"9e57b4d826bbc9954bac5413b1a32942ead9162c7e4e2c309bda965d9e886b07"} Mar 19 09:43:01.185949 master-0 kubenswrapper[27819]: I0319 09:43:01.185948 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" event={"ID":"ced9f429-b996-49d8-a7ea-118f63006d9c","Type":"ContainerStarted","Data":"534503127ccfba0ec1c7c132b02f194cef0df2a2b33d2655c4104e1ddbf3fb76"} Mar 19 09:43:01.246656 master-0 kubenswrapper[27819]: I0319 09:43:01.246588 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65"] Mar 19 09:43:01.265414 master-0 kubenswrapper[27819]: I0319 09:43:01.264096 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.272692 master-0 kubenswrapper[27819]: I0319 09:43:01.271627 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65"] Mar 19 09:43:01.367874 master-0 kubenswrapper[27819]: I0319 09:43:01.367809 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.368139 master-0 kubenswrapper[27819]: I0319 09:43:01.367921 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.368139 master-0 kubenswrapper[27819]: I0319 09:43:01.367983 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2fx\" (UniqueName: \"kubernetes.io/projected/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-kube-api-access-wm2fx\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.469767 master-0 kubenswrapper[27819]: I0319 09:43:01.469605 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.469767 master-0 kubenswrapper[27819]: I0319 09:43:01.469687 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.469767 master-0 kubenswrapper[27819]: I0319 09:43:01.469738 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm2fx\" (UniqueName: \"kubernetes.io/projected/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-kube-api-access-wm2fx\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.470645 master-0 kubenswrapper[27819]: I0319 09:43:01.470628 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.470749 master-0 kubenswrapper[27819]: I0319 09:43:01.470626 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-bundle\") pod 
\"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.486914 master-0 kubenswrapper[27819]: I0319 09:43:01.486867 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm2fx\" (UniqueName: \"kubernetes.io/projected/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-kube-api-access-wm2fx\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:01.607208 master-0 kubenswrapper[27819]: I0319 09:43:01.606067 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:02.026778 master-0 kubenswrapper[27819]: I0319 09:43:02.026704 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65"] Mar 19 09:43:02.028654 master-0 kubenswrapper[27819]: W0319 09:43:02.028603 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85a54f7e_dcdf_4dd1_b3a4_6a2c4b360610.slice/crio-6c84d5fd44e0e19e37323760a37a6cc2156d1b20eee5d90ac6844b3137bd67b5 WatchSource:0}: Error finding container 6c84d5fd44e0e19e37323760a37a6cc2156d1b20eee5d90ac6844b3137bd67b5: Status 404 returned error can't find the container with id 6c84d5fd44e0e19e37323760a37a6cc2156d1b20eee5d90ac6844b3137bd67b5 Mar 19 09:43:02.194479 master-0 kubenswrapper[27819]: I0319 09:43:02.194379 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" 
event={"ID":"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610","Type":"ContainerStarted","Data":"45417d37518e30620a55771b3299373169404a60a5826a6f920f3beb5e09e101"} Mar 19 09:43:02.194479 master-0 kubenswrapper[27819]: I0319 09:43:02.194454 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" event={"ID":"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610","Type":"ContainerStarted","Data":"6c84d5fd44e0e19e37323760a37a6cc2156d1b20eee5d90ac6844b3137bd67b5"} Mar 19 09:43:02.196421 master-0 kubenswrapper[27819]: I0319 09:43:02.196340 27819 generic.go:334] "Generic (PLEG): container finished" podID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerID="9e57b4d826bbc9954bac5413b1a32942ead9162c7e4e2c309bda965d9e886b07" exitCode=0 Mar 19 09:43:02.196421 master-0 kubenswrapper[27819]: I0319 09:43:02.196368 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" event={"ID":"ced9f429-b996-49d8-a7ea-118f63006d9c","Type":"ContainerDied","Data":"9e57b4d826bbc9954bac5413b1a32942ead9162c7e4e2c309bda965d9e886b07"} Mar 19 09:43:03.204416 master-0 kubenswrapper[27819]: I0319 09:43:03.204269 27819 generic.go:334] "Generic (PLEG): container finished" podID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerID="45417d37518e30620a55771b3299373169404a60a5826a6f920f3beb5e09e101" exitCode=0 Mar 19 09:43:03.204416 master-0 kubenswrapper[27819]: I0319 09:43:03.204359 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" event={"ID":"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610","Type":"ContainerDied","Data":"45417d37518e30620a55771b3299373169404a60a5826a6f920f3beb5e09e101"} Mar 19 09:43:05.222415 master-0 kubenswrapper[27819]: I0319 09:43:05.222285 27819 generic.go:334] "Generic (PLEG): container finished" podID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" 
containerID="91260419ef1025795300eb12099dd64b0ed9fdda16179ba2061f0e2779ebd1ee" exitCode=0 Mar 19 09:43:05.222415 master-0 kubenswrapper[27819]: I0319 09:43:05.222350 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" event={"ID":"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610","Type":"ContainerDied","Data":"91260419ef1025795300eb12099dd64b0ed9fdda16179ba2061f0e2779ebd1ee"} Mar 19 09:43:05.225495 master-0 kubenswrapper[27819]: I0319 09:43:05.225453 27819 generic.go:334] "Generic (PLEG): container finished" podID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerID="c1cb415366bd42771c918a8888610b615b415d294de86685ab132a50dd4204de" exitCode=0 Mar 19 09:43:05.225621 master-0 kubenswrapper[27819]: I0319 09:43:05.225512 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" event={"ID":"1f7ac405-63a2-48cb-ad69-cd1d648ba468","Type":"ContainerDied","Data":"c1cb415366bd42771c918a8888610b615b415d294de86685ab132a50dd4204de"} Mar 19 09:43:06.236568 master-0 kubenswrapper[27819]: I0319 09:43:06.236447 27819 generic.go:334] "Generic (PLEG): container finished" podID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerID="ead09f756483d683bdd7152c206503a88a1ea65d142c9b5f5f3005cd986c7253" exitCode=0 Mar 19 09:43:06.236568 master-0 kubenswrapper[27819]: I0319 09:43:06.236523 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" event={"ID":"1f7ac405-63a2-48cb-ad69-cd1d648ba468","Type":"ContainerDied","Data":"ead09f756483d683bdd7152c206503a88a1ea65d142c9b5f5f3005cd986c7253"} Mar 19 09:43:06.239310 master-0 kubenswrapper[27819]: I0319 09:43:06.239271 27819 generic.go:334] "Generic (PLEG): container finished" podID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" 
containerID="8a7d66778be6cf33fa68559f05a3cd1ebcf6b5f72fe4c690a7cc1bc7723f5ba7" exitCode=0 Mar 19 09:43:06.239310 master-0 kubenswrapper[27819]: I0319 09:43:06.239308 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" event={"ID":"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610","Type":"ContainerDied","Data":"8a7d66778be6cf33fa68559f05a3cd1ebcf6b5f72fe4c690a7cc1bc7723f5ba7"} Mar 19 09:43:07.251973 master-0 kubenswrapper[27819]: I0319 09:43:07.251911 27819 generic.go:334] "Generic (PLEG): container finished" podID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerID="493db5ad104273ee50d653be7d141de98e76ef0dc91b8b278e14b499b29d03bb" exitCode=0 Mar 19 09:43:07.252625 master-0 kubenswrapper[27819]: I0319 09:43:07.252032 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" event={"ID":"ced9f429-b996-49d8-a7ea-118f63006d9c","Type":"ContainerDied","Data":"493db5ad104273ee50d653be7d141de98e76ef0dc91b8b278e14b499b29d03bb"} Mar 19 09:43:07.659223 master-0 kubenswrapper[27819]: I0319 09:43:07.659181 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:43:07.665694 master-0 kubenswrapper[27819]: I0319 09:43:07.664934 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:07.775941 master-0 kubenswrapper[27819]: I0319 09:43:07.775887 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-bundle\") pod \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " Mar 19 09:43:07.776148 master-0 kubenswrapper[27819]: I0319 09:43:07.775952 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm2fx\" (UniqueName: \"kubernetes.io/projected/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-kube-api-access-wm2fx\") pod \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " Mar 19 09:43:07.776148 master-0 kubenswrapper[27819]: I0319 09:43:07.775987 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-bundle\") pod \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " Mar 19 09:43:07.776148 master-0 kubenswrapper[27819]: I0319 09:43:07.776076 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-util\") pod \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " Mar 19 09:43:07.776148 master-0 kubenswrapper[27819]: I0319 09:43:07.776119 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-util\") pod \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\" (UID: \"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610\") " Mar 19 09:43:07.776328 master-0 kubenswrapper[27819]: I0319 09:43:07.776193 27819 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjbnc\" (UniqueName: \"kubernetes.io/projected/1f7ac405-63a2-48cb-ad69-cd1d648ba468-kube-api-access-hjbnc\") pod \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\" (UID: \"1f7ac405-63a2-48cb-ad69-cd1d648ba468\") " Mar 19 09:43:07.776647 master-0 kubenswrapper[27819]: I0319 09:43:07.776602 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-bundle" (OuterVolumeSpecName: "bundle") pod "1f7ac405-63a2-48cb-ad69-cd1d648ba468" (UID: "1f7ac405-63a2-48cb-ad69-cd1d648ba468"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:07.777329 master-0 kubenswrapper[27819]: I0319 09:43:07.777270 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-bundle" (OuterVolumeSpecName: "bundle") pod "85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" (UID: "85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:07.779157 master-0 kubenswrapper[27819]: I0319 09:43:07.779129 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-kube-api-access-wm2fx" (OuterVolumeSpecName: "kube-api-access-wm2fx") pod "85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" (UID: "85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610"). InnerVolumeSpecName "kube-api-access-wm2fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:43:07.779262 master-0 kubenswrapper[27819]: I0319 09:43:07.779233 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7ac405-63a2-48cb-ad69-cd1d648ba468-kube-api-access-hjbnc" (OuterVolumeSpecName: "kube-api-access-hjbnc") pod "1f7ac405-63a2-48cb-ad69-cd1d648ba468" (UID: "1f7ac405-63a2-48cb-ad69-cd1d648ba468"). InnerVolumeSpecName "kube-api-access-hjbnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:43:07.786970 master-0 kubenswrapper[27819]: I0319 09:43:07.786900 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-util" (OuterVolumeSpecName: "util") pod "85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" (UID: "85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:07.788056 master-0 kubenswrapper[27819]: I0319 09:43:07.788021 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-util" (OuterVolumeSpecName: "util") pod "1f7ac405-63a2-48cb-ad69-cd1d648ba468" (UID: "1f7ac405-63a2-48cb-ad69-cd1d648ba468"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:07.877121 master-0 kubenswrapper[27819]: I0319 09:43:07.877044 27819 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-util\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:07.877401 master-0 kubenswrapper[27819]: I0319 09:43:07.877079 27819 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-util\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:07.877401 master-0 kubenswrapper[27819]: I0319 09:43:07.877252 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjbnc\" (UniqueName: \"kubernetes.io/projected/1f7ac405-63a2-48cb-ad69-cd1d648ba468-kube-api-access-hjbnc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:07.877401 master-0 kubenswrapper[27819]: I0319 09:43:07.877273 27819 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:07.877401 master-0 kubenswrapper[27819]: I0319 09:43:07.877290 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm2fx\" (UniqueName: \"kubernetes.io/projected/85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610-kube-api-access-wm2fx\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:07.877401 master-0 kubenswrapper[27819]: I0319 09:43:07.877307 27819 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f7ac405-63a2-48cb-ad69-cd1d648ba468-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:08.266075 master-0 kubenswrapper[27819]: I0319 09:43:08.265996 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs"] Mar 19 09:43:08.267010 master-0 
kubenswrapper[27819]: E0319 09:43:08.266438 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerName="util" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: I0319 09:43:08.266461 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerName="util" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: E0319 09:43:08.266496 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerName="pull" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: I0319 09:43:08.266508 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerName="pull" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: E0319 09:43:08.266767 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerName="extract" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: I0319 09:43:08.266796 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerName="extract" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: E0319 09:43:08.266828 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerName="util" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: I0319 09:43:08.266849 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerName="util" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: E0319 09:43:08.266871 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerName="pull" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: I0319 09:43:08.266889 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerName="pull" Mar 19 
09:43:08.267010 master-0 kubenswrapper[27819]: E0319 09:43:08.266912 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerName="extract" Mar 19 09:43:08.267010 master-0 kubenswrapper[27819]: I0319 09:43:08.266929 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerName="extract" Mar 19 09:43:08.268229 master-0 kubenswrapper[27819]: I0319 09:43:08.267257 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7ac405-63a2-48cb-ad69-cd1d648ba468" containerName="extract" Mar 19 09:43:08.268229 master-0 kubenswrapper[27819]: I0319 09:43:08.267279 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610" containerName="extract" Mar 19 09:43:08.268229 master-0 kubenswrapper[27819]: I0319 09:43:08.268164 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" Mar 19 09:43:08.271868 master-0 kubenswrapper[27819]: I0319 09:43:08.268418 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874mdtsb" event={"ID":"1f7ac405-63a2-48cb-ad69-cd1d648ba468","Type":"ContainerDied","Data":"e84c5a72e8a9c9c93cd484128b7d3c252ca184867974d8c8243969f9f6b23c21"} Mar 19 09:43:08.271868 master-0 kubenswrapper[27819]: I0319 09:43:08.268450 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e84c5a72e8a9c9c93cd484128b7d3c252ca184867974d8c8243969f9f6b23c21" Mar 19 09:43:08.271868 master-0 kubenswrapper[27819]: I0319 09:43:08.269022 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.273521 master-0 kubenswrapper[27819]: I0319 09:43:08.273476 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" Mar 19 09:43:08.273755 master-0 kubenswrapper[27819]: I0319 09:43:08.273685 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1r8j65" event={"ID":"85a54f7e-dcdf-4dd1-b3a4-6a2c4b360610","Type":"ContainerDied","Data":"6c84d5fd44e0e19e37323760a37a6cc2156d1b20eee5d90ac6844b3137bd67b5"} Mar 19 09:43:08.273825 master-0 kubenswrapper[27819]: I0319 09:43:08.273756 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c84d5fd44e0e19e37323760a37a6cc2156d1b20eee5d90ac6844b3137bd67b5" Mar 19 09:43:08.276298 master-0 kubenswrapper[27819]: I0319 09:43:08.276096 27819 generic.go:334] "Generic (PLEG): container finished" podID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerID="3b9bc605f4e6c2386f46d4d783d868f8406722be1bac0132b0c96a3d75fddc53" exitCode=0 Mar 19 09:43:08.276298 master-0 kubenswrapper[27819]: I0319 09:43:08.276129 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" event={"ID":"ced9f429-b996-49d8-a7ea-118f63006d9c","Type":"ContainerDied","Data":"3b9bc605f4e6c2386f46d4d783d868f8406722be1bac0132b0c96a3d75fddc53"} Mar 19 09:43:08.285588 master-0 kubenswrapper[27819]: I0319 09:43:08.280224 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs"] Mar 19 09:43:08.384896 master-0 kubenswrapper[27819]: I0319 09:43:08.384836 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.385103 master-0 kubenswrapper[27819]: I0319 09:43:08.385045 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7rts\" (UniqueName: \"kubernetes.io/projected/2d05da32-afd1-448a-af37-4e2d10a43cb8-kube-api-access-x7rts\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.385103 master-0 kubenswrapper[27819]: I0319 09:43:08.385072 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.486314 master-0 kubenswrapper[27819]: I0319 09:43:08.486241 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.486516 master-0 kubenswrapper[27819]: I0319 09:43:08.486470 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7rts\" (UniqueName: 
\"kubernetes.io/projected/2d05da32-afd1-448a-af37-4e2d10a43cb8-kube-api-access-x7rts\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.486516 master-0 kubenswrapper[27819]: I0319 09:43:08.486509 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.486902 master-0 kubenswrapper[27819]: I0319 09:43:08.486850 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.486993 master-0 kubenswrapper[27819]: I0319 09:43:08.486952 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.503720 master-0 kubenswrapper[27819]: I0319 09:43:08.503645 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7rts\" (UniqueName: \"kubernetes.io/projected/2d05da32-afd1-448a-af37-4e2d10a43cb8-kube-api-access-x7rts\") pod 
\"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:08.587563 master-0 kubenswrapper[27819]: I0319 09:43:08.587398 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:09.001209 master-0 kubenswrapper[27819]: I0319 09:43:09.001143 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs"] Mar 19 09:43:09.005163 master-0 kubenswrapper[27819]: W0319 09:43:09.005128 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d05da32_afd1_448a_af37_4e2d10a43cb8.slice/crio-d8d9639ced8a4e2281bf1cb6f623c135c50686b35f4350bcb44716923ab2ec2b WatchSource:0}: Error finding container d8d9639ced8a4e2281bf1cb6f623c135c50686b35f4350bcb44716923ab2ec2b: Status 404 returned error can't find the container with id d8d9639ced8a4e2281bf1cb6f623c135c50686b35f4350bcb44716923ab2ec2b Mar 19 09:43:09.284674 master-0 kubenswrapper[27819]: I0319 09:43:09.284618 27819 generic.go:334] "Generic (PLEG): container finished" podID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerID="eb147f5d71f4f4e1925613fd6463046238d07f74a5e3ab4c0011414e82476abd" exitCode=0 Mar 19 09:43:09.298791 master-0 kubenswrapper[27819]: I0319 09:43:09.298744 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" event={"ID":"2d05da32-afd1-448a-af37-4e2d10a43cb8","Type":"ContainerDied","Data":"eb147f5d71f4f4e1925613fd6463046238d07f74a5e3ab4c0011414e82476abd"} Mar 19 09:43:09.298888 master-0 kubenswrapper[27819]: I0319 09:43:09.298788 27819 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" event={"ID":"2d05da32-afd1-448a-af37-4e2d10a43cb8","Type":"ContainerStarted","Data":"d8d9639ced8a4e2281bf1cb6f623c135c50686b35f4350bcb44716923ab2ec2b"} Mar 19 09:43:09.640858 master-0 kubenswrapper[27819]: I0319 09:43:09.640806 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:09.705729 master-0 kubenswrapper[27819]: I0319 09:43:09.705606 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-util\") pod \"ced9f429-b996-49d8-a7ea-118f63006d9c\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " Mar 19 09:43:09.706169 master-0 kubenswrapper[27819]: I0319 09:43:09.705818 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-bundle\") pod \"ced9f429-b996-49d8-a7ea-118f63006d9c\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " Mar 19 09:43:09.706169 master-0 kubenswrapper[27819]: I0319 09:43:09.706055 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwkxt\" (UniqueName: \"kubernetes.io/projected/ced9f429-b996-49d8-a7ea-118f63006d9c-kube-api-access-nwkxt\") pod \"ced9f429-b996-49d8-a7ea-118f63006d9c\" (UID: \"ced9f429-b996-49d8-a7ea-118f63006d9c\") " Mar 19 09:43:09.706895 master-0 kubenswrapper[27819]: I0319 09:43:09.706840 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-bundle" (OuterVolumeSpecName: "bundle") pod "ced9f429-b996-49d8-a7ea-118f63006d9c" (UID: "ced9f429-b996-49d8-a7ea-118f63006d9c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:09.707283 master-0 kubenswrapper[27819]: I0319 09:43:09.707238 27819 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:09.709106 master-0 kubenswrapper[27819]: I0319 09:43:09.709050 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced9f429-b996-49d8-a7ea-118f63006d9c-kube-api-access-nwkxt" (OuterVolumeSpecName: "kube-api-access-nwkxt") pod "ced9f429-b996-49d8-a7ea-118f63006d9c" (UID: "ced9f429-b996-49d8-a7ea-118f63006d9c"). InnerVolumeSpecName "kube-api-access-nwkxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:43:09.716570 master-0 kubenswrapper[27819]: I0319 09:43:09.716457 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-util" (OuterVolumeSpecName: "util") pod "ced9f429-b996-49d8-a7ea-118f63006d9c" (UID: "ced9f429-b996-49d8-a7ea-118f63006d9c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:09.813525 master-0 kubenswrapper[27819]: I0319 09:43:09.813420 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwkxt\" (UniqueName: \"kubernetes.io/projected/ced9f429-b996-49d8-a7ea-118f63006d9c-kube-api-access-nwkxt\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:09.813525 master-0 kubenswrapper[27819]: I0319 09:43:09.813464 27819 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ced9f429-b996-49d8-a7ea-118f63006d9c-util\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:10.293686 master-0 kubenswrapper[27819]: I0319 09:43:10.293563 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" event={"ID":"ced9f429-b996-49d8-a7ea-118f63006d9c","Type":"ContainerDied","Data":"534503127ccfba0ec1c7c132b02f194cef0df2a2b33d2655c4104e1ddbf3fb76"} Mar 19 09:43:10.293686 master-0 kubenswrapper[27819]: I0319 09:43:10.293617 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534503127ccfba0ec1c7c132b02f194cef0df2a2b33d2655c4104e1ddbf3fb76" Mar 19 09:43:10.293686 master-0 kubenswrapper[27819]: I0319 09:43:10.293648 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wvd7z" Mar 19 09:43:10.441974 master-0 kubenswrapper[27819]: I0319 09:43:10.441883 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2"] Mar 19 09:43:10.442305 master-0 kubenswrapper[27819]: E0319 09:43:10.442279 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerName="pull" Mar 19 09:43:10.442305 master-0 kubenswrapper[27819]: I0319 09:43:10.442301 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerName="pull" Mar 19 09:43:10.442483 master-0 kubenswrapper[27819]: E0319 09:43:10.442326 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerName="util" Mar 19 09:43:10.442483 master-0 kubenswrapper[27819]: I0319 09:43:10.442335 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerName="util" Mar 19 09:43:10.442483 master-0 kubenswrapper[27819]: E0319 09:43:10.442349 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerName="extract" Mar 19 09:43:10.442483 master-0 kubenswrapper[27819]: I0319 09:43:10.442358 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerName="extract" Mar 19 09:43:10.442768 master-0 kubenswrapper[27819]: I0319 09:43:10.442516 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced9f429-b996-49d8-a7ea-118f63006d9c" containerName="extract" Mar 19 09:43:10.443150 master-0 kubenswrapper[27819]: I0319 09:43:10.443116 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" Mar 19 09:43:10.445099 master-0 kubenswrapper[27819]: I0319 09:43:10.445037 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 19 09:43:10.445303 master-0 kubenswrapper[27819]: I0319 09:43:10.445282 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 19 09:43:10.471346 master-0 kubenswrapper[27819]: I0319 09:43:10.471280 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2"] Mar 19 09:43:10.524857 master-0 kubenswrapper[27819]: I0319 09:43:10.524452 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rzc\" (UniqueName: \"kubernetes.io/projected/ae8ca2b9-5b90-4bc2-8246-672842086a60-kube-api-access-77rzc\") pod \"nmstate-operator-796d4cfff4-p7tf2\" (UID: \"ae8ca2b9-5b90-4bc2-8246-672842086a60\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" Mar 19 09:43:10.626590 master-0 kubenswrapper[27819]: I0319 09:43:10.626510 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77rzc\" (UniqueName: \"kubernetes.io/projected/ae8ca2b9-5b90-4bc2-8246-672842086a60-kube-api-access-77rzc\") pod \"nmstate-operator-796d4cfff4-p7tf2\" (UID: \"ae8ca2b9-5b90-4bc2-8246-672842086a60\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" Mar 19 09:43:10.641598 master-0 kubenswrapper[27819]: I0319 09:43:10.641409 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77rzc\" (UniqueName: \"kubernetes.io/projected/ae8ca2b9-5b90-4bc2-8246-672842086a60-kube-api-access-77rzc\") pod \"nmstate-operator-796d4cfff4-p7tf2\" (UID: \"ae8ca2b9-5b90-4bc2-8246-672842086a60\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" Mar 19 09:43:10.771888 
master-0 kubenswrapper[27819]: I0319 09:43:10.771843 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" Mar 19 09:43:11.176744 master-0 kubenswrapper[27819]: I0319 09:43:11.176700 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2"] Mar 19 09:43:11.179211 master-0 kubenswrapper[27819]: W0319 09:43:11.179169 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8ca2b9_5b90_4bc2_8246_672842086a60.slice/crio-a826839d369a0cc795d84163626981227ebbee7faf123d2e638569a4c1aaa2a5 WatchSource:0}: Error finding container a826839d369a0cc795d84163626981227ebbee7faf123d2e638569a4c1aaa2a5: Status 404 returned error can't find the container with id a826839d369a0cc795d84163626981227ebbee7faf123d2e638569a4c1aaa2a5 Mar 19 09:43:11.307253 master-0 kubenswrapper[27819]: I0319 09:43:11.307197 27819 generic.go:334] "Generic (PLEG): container finished" podID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerID="df3eaf260b13d1839f33126ada22e0024542c88bee7fb4ce158643ccfb27f5fd" exitCode=0 Mar 19 09:43:11.311028 master-0 kubenswrapper[27819]: I0319 09:43:11.308855 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" event={"ID":"2d05da32-afd1-448a-af37-4e2d10a43cb8","Type":"ContainerDied","Data":"df3eaf260b13d1839f33126ada22e0024542c88bee7fb4ce158643ccfb27f5fd"} Mar 19 09:43:11.311028 master-0 kubenswrapper[27819]: I0319 09:43:11.309679 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" event={"ID":"ae8ca2b9-5b90-4bc2-8246-672842086a60","Type":"ContainerStarted","Data":"a826839d369a0cc795d84163626981227ebbee7faf123d2e638569a4c1aaa2a5"} Mar 19 09:43:12.318373 master-0 kubenswrapper[27819]: I0319 09:43:12.318303 27819 
generic.go:334] "Generic (PLEG): container finished" podID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerID="d1d342cf4d4320a9800ba60f32dc355869a04179c59ac36f06be551fd20e2394" exitCode=0 Mar 19 09:43:12.318373 master-0 kubenswrapper[27819]: I0319 09:43:12.318355 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" event={"ID":"2d05da32-afd1-448a-af37-4e2d10a43cb8","Type":"ContainerDied","Data":"d1d342cf4d4320a9800ba60f32dc355869a04179c59ac36f06be551fd20e2394"} Mar 19 09:43:13.852868 master-0 kubenswrapper[27819]: I0319 09:43:13.852792 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:13.877868 master-0 kubenswrapper[27819]: I0319 09:43:13.877810 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-util\") pod \"2d05da32-afd1-448a-af37-4e2d10a43cb8\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " Mar 19 09:43:13.878131 master-0 kubenswrapper[27819]: I0319 09:43:13.877903 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7rts\" (UniqueName: \"kubernetes.io/projected/2d05da32-afd1-448a-af37-4e2d10a43cb8-kube-api-access-x7rts\") pod \"2d05da32-afd1-448a-af37-4e2d10a43cb8\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " Mar 19 09:43:13.878562 master-0 kubenswrapper[27819]: I0319 09:43:13.878522 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-bundle\") pod \"2d05da32-afd1-448a-af37-4e2d10a43cb8\" (UID: \"2d05da32-afd1-448a-af37-4e2d10a43cb8\") " Mar 19 09:43:13.881186 master-0 kubenswrapper[27819]: I0319 09:43:13.881144 27819 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d05da32-afd1-448a-af37-4e2d10a43cb8-kube-api-access-x7rts" (OuterVolumeSpecName: "kube-api-access-x7rts") pod "2d05da32-afd1-448a-af37-4e2d10a43cb8" (UID: "2d05da32-afd1-448a-af37-4e2d10a43cb8"). InnerVolumeSpecName "kube-api-access-x7rts". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:43:13.881416 master-0 kubenswrapper[27819]: I0319 09:43:13.881364 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-bundle" (OuterVolumeSpecName: "bundle") pod "2d05da32-afd1-448a-af37-4e2d10a43cb8" (UID: "2d05da32-afd1-448a-af37-4e2d10a43cb8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:13.900625 master-0 kubenswrapper[27819]: I0319 09:43:13.900419 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-util" (OuterVolumeSpecName: "util") pod "2d05da32-afd1-448a-af37-4e2d10a43cb8" (UID: "2d05da32-afd1-448a-af37-4e2d10a43cb8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:43:13.980275 master-0 kubenswrapper[27819]: I0319 09:43:13.980206 27819 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-util\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:13.980275 master-0 kubenswrapper[27819]: I0319 09:43:13.980246 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7rts\" (UniqueName: \"kubernetes.io/projected/2d05da32-afd1-448a-af37-4e2d10a43cb8-kube-api-access-x7rts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:13.980275 master-0 kubenswrapper[27819]: I0319 09:43:13.980257 27819 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d05da32-afd1-448a-af37-4e2d10a43cb8-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:43:14.334874 master-0 kubenswrapper[27819]: I0319 09:43:14.334816 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" event={"ID":"2d05da32-afd1-448a-af37-4e2d10a43cb8","Type":"ContainerDied","Data":"d8d9639ced8a4e2281bf1cb6f623c135c50686b35f4350bcb44716923ab2ec2b"} Mar 19 09:43:14.334874 master-0 kubenswrapper[27819]: I0319 09:43:14.334867 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d9639ced8a4e2281bf1cb6f623c135c50686b35f4350bcb44716923ab2ec2b" Mar 19 09:43:14.334874 master-0 kubenswrapper[27819]: I0319 09:43:14.334872 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726684zs" Mar 19 09:43:15.347105 master-0 kubenswrapper[27819]: I0319 09:43:15.347059 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" event={"ID":"ae8ca2b9-5b90-4bc2-8246-672842086a60","Type":"ContainerStarted","Data":"ceb42112e65f1fc1ca7bd6162dc78f42accf1a2fdc34abcb34dcc79e30d28cdb"} Mar 19 09:43:15.372018 master-0 kubenswrapper[27819]: I0319 09:43:15.371933 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-p7tf2" podStartSLOduration=1.938577751 podStartE2EDuration="5.371911134s" podCreationTimestamp="2026-03-19 09:43:10 +0000 UTC" firstStartedPulling="2026-03-19 09:43:11.180488321 +0000 UTC m=+576.102066013" lastFinishedPulling="2026-03-19 09:43:14.613821704 +0000 UTC m=+579.535399396" observedRunningTime="2026-03-19 09:43:15.371429731 +0000 UTC m=+580.293007443" watchObservedRunningTime="2026-03-19 09:43:15.371911134 +0000 UTC m=+580.293488836" Mar 19 09:43:20.929366 master-0 kubenswrapper[27819]: I0319 09:43:20.929296 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w"] Mar 19 09:43:20.930111 master-0 kubenswrapper[27819]: E0319 09:43:20.929746 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerName="util" Mar 19 09:43:20.930111 master-0 kubenswrapper[27819]: I0319 09:43:20.929769 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerName="util" Mar 19 09:43:20.930111 master-0 kubenswrapper[27819]: E0319 09:43:20.929800 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerName="pull" Mar 19 09:43:20.930111 master-0 kubenswrapper[27819]: I0319 
09:43:20.929812 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerName="pull" Mar 19 09:43:20.930111 master-0 kubenswrapper[27819]: E0319 09:43:20.929845 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerName="extract" Mar 19 09:43:20.930111 master-0 kubenswrapper[27819]: I0319 09:43:20.929859 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerName="extract" Mar 19 09:43:20.930111 master-0 kubenswrapper[27819]: I0319 09:43:20.930049 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d05da32-afd1-448a-af37-4e2d10a43cb8" containerName="extract" Mar 19 09:43:20.930685 master-0 kubenswrapper[27819]: I0319 09:43:20.930662 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 09:43:20.932765 master-0 kubenswrapper[27819]: I0319 09:43:20.932707 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 19 09:43:20.932891 master-0 kubenswrapper[27819]: I0319 09:43:20.932868 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 19 09:43:20.945408 master-0 kubenswrapper[27819]: I0319 09:43:20.945355 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w"] Mar 19 09:43:20.985514 master-0 kubenswrapper[27819]: I0319 09:43:20.985446 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdwc\" (UniqueName: \"kubernetes.io/projected/74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5-kube-api-access-2mdwc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-qvw5w\" (UID: 
\"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 09:43:20.985757 master-0 kubenswrapper[27819]: I0319 09:43:20.985611 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-qvw5w\" (UID: \"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 09:43:21.086373 master-0 kubenswrapper[27819]: I0319 09:43:21.086310 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdwc\" (UniqueName: \"kubernetes.io/projected/74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5-kube-api-access-2mdwc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-qvw5w\" (UID: \"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 09:43:21.086606 master-0 kubenswrapper[27819]: I0319 09:43:21.086438 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-qvw5w\" (UID: \"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 09:43:21.087066 master-0 kubenswrapper[27819]: I0319 09:43:21.087039 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-qvw5w\" (UID: \"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 
09:43:21.102462 master-0 kubenswrapper[27819]: I0319 09:43:21.102415 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdwc\" (UniqueName: \"kubernetes.io/projected/74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5-kube-api-access-2mdwc\") pod \"cert-manager-operator-controller-manager-66c8bdd694-qvw5w\" (UID: \"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 09:43:21.248495 master-0 kubenswrapper[27819]: I0319 09:43:21.248371 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" Mar 19 09:43:21.667747 master-0 kubenswrapper[27819]: I0319 09:43:21.667672 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w"] Mar 19 09:43:21.678827 master-0 kubenswrapper[27819]: W0319 09:43:21.678775 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74f7fb4c_eee2_4b9b_b44d_dee9a8d6bda5.slice/crio-5caaa64c08e531d67a87e72e4cafdc6bb24b43abd2a628d68c0fcd81869c467a WatchSource:0}: Error finding container 5caaa64c08e531d67a87e72e4cafdc6bb24b43abd2a628d68c0fcd81869c467a: Status 404 returned error can't find the container with id 5caaa64c08e531d67a87e72e4cafdc6bb24b43abd2a628d68c0fcd81869c467a Mar 19 09:43:22.405727 master-0 kubenswrapper[27819]: I0319 09:43:22.404088 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" event={"ID":"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5","Type":"ContainerStarted","Data":"5caaa64c08e531d67a87e72e4cafdc6bb24b43abd2a628d68c0fcd81869c467a"} Mar 19 09:43:25.426140 master-0 kubenswrapper[27819]: I0319 09:43:25.426093 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" event={"ID":"74f7fb4c-eee2-4b9b-b44d-dee9a8d6bda5","Type":"ContainerStarted","Data":"ef77b0e1ce849dcdd524193f77dec0a7efd289291a527ea37b10ddf49d99b34d"} Mar 19 09:43:25.454889 master-0 kubenswrapper[27819]: I0319 09:43:25.454806 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-qvw5w" podStartSLOduration=1.9027454879999999 podStartE2EDuration="5.454785488s" podCreationTimestamp="2026-03-19 09:43:20 +0000 UTC" firstStartedPulling="2026-03-19 09:43:21.680703898 +0000 UTC m=+586.602281590" lastFinishedPulling="2026-03-19 09:43:25.232743898 +0000 UTC m=+590.154321590" observedRunningTime="2026-03-19 09:43:25.446297438 +0000 UTC m=+590.367875130" watchObservedRunningTime="2026-03-19 09:43:25.454785488 +0000 UTC m=+590.376363190" Mar 19 09:43:30.828783 master-0 kubenswrapper[27819]: I0319 09:43:30.828719 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-cfh4d"] Mar 19 09:43:30.829825 master-0 kubenswrapper[27819]: I0319 09:43:30.829793 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:30.854564 master-0 kubenswrapper[27819]: I0319 09:43:30.851046 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 19 09:43:30.854564 master-0 kubenswrapper[27819]: I0319 09:43:30.851616 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 19 09:43:30.862001 master-0 kubenswrapper[27819]: I0319 09:43:30.861919 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghdm\" (UniqueName: \"kubernetes.io/projected/cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb-kube-api-access-dghdm\") pod \"cert-manager-webhook-6888856db4-cfh4d\" (UID: \"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb\") " pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:30.862218 master-0 kubenswrapper[27819]: I0319 09:43:30.862027 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-cfh4d\" (UID: \"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb\") " pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:30.884584 master-0 kubenswrapper[27819]: I0319 09:43:30.882308 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-cfh4d"] Mar 19 09:43:30.971569 master-0 kubenswrapper[27819]: I0319 09:43:30.964401 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dghdm\" (UniqueName: \"kubernetes.io/projected/cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb-kube-api-access-dghdm\") pod \"cert-manager-webhook-6888856db4-cfh4d\" (UID: \"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb\") " pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 
09:43:30.971569 master-0 kubenswrapper[27819]: I0319 09:43:30.964478 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-cfh4d\" (UID: \"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb\") " pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:30.994309 master-0 kubenswrapper[27819]: I0319 09:43:30.994257 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-cfh4d\" (UID: \"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb\") " pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:30.999278 master-0 kubenswrapper[27819]: I0319 09:43:30.999231 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dghdm\" (UniqueName: \"kubernetes.io/projected/cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb-kube-api-access-dghdm\") pod \"cert-manager-webhook-6888856db4-cfh4d\" (UID: \"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb\") " pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:31.207289 master-0 kubenswrapper[27819]: I0319 09:43:31.207220 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:31.792250 master-0 kubenswrapper[27819]: I0319 09:43:31.792191 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-cfh4d"] Mar 19 09:43:32.317405 master-0 kubenswrapper[27819]: I0319 09:43:32.317344 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7xt6t"] Mar 19 09:43:32.318507 master-0 kubenswrapper[27819]: I0319 09:43:32.318475 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" Mar 19 09:43:32.328961 master-0 kubenswrapper[27819]: I0319 09:43:32.328900 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7xt6t"] Mar 19 09:43:32.390178 master-0 kubenswrapper[27819]: I0319 09:43:32.388779 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6a0521c-c6d9-4422-988b-1e91369a664c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7xt6t\" (UID: \"d6a0521c-c6d9-4422-988b-1e91369a664c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" Mar 19 09:43:32.390178 master-0 kubenswrapper[27819]: I0319 09:43:32.389618 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmxz\" (UniqueName: \"kubernetes.io/projected/d6a0521c-c6d9-4422-988b-1e91369a664c-kube-api-access-bhmxz\") pod \"cert-manager-cainjector-5545bd876-7xt6t\" (UID: \"d6a0521c-c6d9-4422-988b-1e91369a664c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" Mar 19 09:43:32.487534 master-0 kubenswrapper[27819]: I0319 09:43:32.487486 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" event={"ID":"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb","Type":"ContainerStarted","Data":"3ec686f467fb996b982d2492ceb3044add10d1a166aa5957d5cff961627ee0cd"} Mar 19 09:43:32.493419 master-0 kubenswrapper[27819]: I0319 09:43:32.491670 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6a0521c-c6d9-4422-988b-1e91369a664c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7xt6t\" (UID: \"d6a0521c-c6d9-4422-988b-1e91369a664c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" Mar 19 09:43:32.493419 master-0 
kubenswrapper[27819]: I0319 09:43:32.491781 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmxz\" (UniqueName: \"kubernetes.io/projected/d6a0521c-c6d9-4422-988b-1e91369a664c-kube-api-access-bhmxz\") pod \"cert-manager-cainjector-5545bd876-7xt6t\" (UID: \"d6a0521c-c6d9-4422-988b-1e91369a664c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" Mar 19 09:43:32.506206 master-0 kubenswrapper[27819]: I0319 09:43:32.505922 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmxz\" (UniqueName: \"kubernetes.io/projected/d6a0521c-c6d9-4422-988b-1e91369a664c-kube-api-access-bhmxz\") pod \"cert-manager-cainjector-5545bd876-7xt6t\" (UID: \"d6a0521c-c6d9-4422-988b-1e91369a664c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" Mar 19 09:43:32.515395 master-0 kubenswrapper[27819]: I0319 09:43:32.515347 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6a0521c-c6d9-4422-988b-1e91369a664c-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-7xt6t\" (UID: \"d6a0521c-c6d9-4422-988b-1e91369a664c\") " pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" Mar 19 09:43:32.638464 master-0 kubenswrapper[27819]: I0319 09:43:32.638392 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t"
Mar 19 09:43:33.790991 master-0 kubenswrapper[27819]: I0319 09:43:33.790855 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-7xt6t"]
Mar 19 09:43:34.526772 master-0 kubenswrapper[27819]: I0319 09:43:34.526645 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" event={"ID":"d6a0521c-c6d9-4422-988b-1e91369a664c","Type":"ContainerStarted","Data":"823686ef49dcf06a328fdba063ceb45bb54a133a0f656ff6cdfd12744ca0f95a"}
Mar 19 09:43:35.384031 master-0 kubenswrapper[27819]: I0319 09:43:35.383985 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"]
Mar 19 09:43:35.388648 master-0 kubenswrapper[27819]: I0319 09:43:35.386320 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.394288 master-0 kubenswrapper[27819]: I0319 09:43:35.394253 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 19 09:43:35.394657 master-0 kubenswrapper[27819]: I0319 09:43:35.394643 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 19 09:43:35.394857 master-0 kubenswrapper[27819]: I0319 09:43:35.394843 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 19 09:43:35.395500 master-0 kubenswrapper[27819]: I0319 09:43:35.395486 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 19 09:43:35.438088 master-0 kubenswrapper[27819]: I0319 09:43:35.438055 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"]
Mar 19 09:43:35.554686 master-0 kubenswrapper[27819]: I0319 09:43:35.554643 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-webhook-cert\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.554686 master-0 kubenswrapper[27819]: I0319 09:43:35.554685 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxs2w\" (UniqueName: \"kubernetes.io/projected/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-kube-api-access-qxs2w\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.555012 master-0 kubenswrapper[27819]: I0319 09:43:35.554793 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-apiservice-cert\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.656708 master-0 kubenswrapper[27819]: I0319 09:43:35.656567 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-webhook-cert\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.656708 master-0 kubenswrapper[27819]: I0319 09:43:35.656634 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxs2w\" (UniqueName: \"kubernetes.io/projected/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-kube-api-access-qxs2w\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.656708 master-0 kubenswrapper[27819]: I0319 09:43:35.656689 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-apiservice-cert\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.660259 master-0 kubenswrapper[27819]: I0319 09:43:35.660229 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-apiservice-cert\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.669561 master-0 kubenswrapper[27819]: I0319 09:43:35.665137 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-webhook-cert\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.680111 master-0 kubenswrapper[27819]: I0319 09:43:35.680079 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxs2w\" (UniqueName: \"kubernetes.io/projected/4b713f9a-4bdf-4e79-894b-28a80f4bc8f6-kube-api-access-qxs2w\") pod \"metallb-operator-controller-manager-6b6cc4d969-58kxx\" (UID: \"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6\") " pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.742181 master-0 kubenswrapper[27819]: I0319 09:43:35.740575 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"]
Mar 19 09:43:35.742181 master-0 kubenswrapper[27819]: I0319 09:43:35.741679 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.743590 master-0 kubenswrapper[27819]: I0319 09:43:35.743559 27819 scope.go:117] "RemoveContainer" containerID="f3ffe4fec33c46fff754b84bc96e8c84dff07f2714439153ed5a5e81bfd1df38"
Mar 19 09:43:35.752796 master-0 kubenswrapper[27819]: I0319 09:43:35.752751 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 19 09:43:35.752997 master-0 kubenswrapper[27819]: I0319 09:43:35.752969 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 19 09:43:35.777297 master-0 kubenswrapper[27819]: I0319 09:43:35.777130 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"]
Mar 19 09:43:35.786776 master-0 kubenswrapper[27819]: I0319 09:43:35.786731 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:35.860467 master-0 kubenswrapper[27819]: I0319 09:43:35.860378 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65llf\" (UniqueName: \"kubernetes.io/projected/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-kube-api-access-65llf\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.860697 master-0 kubenswrapper[27819]: I0319 09:43:35.860476 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-apiservice-cert\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.860697 master-0 kubenswrapper[27819]: I0319 09:43:35.860533 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-webhook-cert\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.962422 master-0 kubenswrapper[27819]: I0319 09:43:35.962361 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-apiservice-cert\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.962616 master-0 kubenswrapper[27819]: I0319 09:43:35.962470 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-webhook-cert\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.962616 master-0 kubenswrapper[27819]: I0319 09:43:35.962527 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65llf\" (UniqueName: \"kubernetes.io/projected/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-kube-api-access-65llf\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.966332 master-0 kubenswrapper[27819]: I0319 09:43:35.966299 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-webhook-cert\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.974787 master-0 kubenswrapper[27819]: I0319 09:43:35.974443 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-apiservice-cert\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:35.984165 master-0 kubenswrapper[27819]: I0319 09:43:35.983949 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65llf\" (UniqueName: \"kubernetes.io/projected/9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e-kube-api-access-65llf\") pod \"metallb-operator-webhook-server-79f4998fb6-8f7km\" (UID: \"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e\") " pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:36.084574 master-0 kubenswrapper[27819]: I0319 09:43:36.084111 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:36.306452 master-0 kubenswrapper[27819]: I0319 09:43:36.304989 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"]
Mar 19 09:43:36.579626 master-0 kubenswrapper[27819]: I0319 09:43:36.576497 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx" event={"ID":"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6","Type":"ContainerStarted","Data":"f0cc8260ebdda1a9fb269ecd5fb240ea2d5156e929bb662078f54ecfc01784db"}
Mar 19 09:43:36.651745 master-0 kubenswrapper[27819]: I0319 09:43:36.651040 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"]
Mar 19 09:43:37.617158 master-0 kubenswrapper[27819]: I0319 09:43:37.617073 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km" event={"ID":"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e","Type":"ContainerStarted","Data":"b2d8261d133ca7f6ba66674b5c2ce118824b49af65bfc1c82136478010bebc97"}
Mar 19 09:43:42.927712 master-0 kubenswrapper[27819]: I0319 09:43:42.926574 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-rg2rp"]
Mar 19 09:43:42.928304 master-0 kubenswrapper[27819]: I0319 09:43:42.927764 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:42.941681 master-0 kubenswrapper[27819]: I0319 09:43:42.941636 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-rg2rp"]
Mar 19 09:43:43.069436 master-0 kubenswrapper[27819]: I0319 09:43:43.069345 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfccs\" (UniqueName: \"kubernetes.io/projected/bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c-kube-api-access-rfccs\") pod \"cert-manager-545d4d4674-rg2rp\" (UID: \"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c\") " pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:43.069680 master-0 kubenswrapper[27819]: I0319 09:43:43.069534 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c-bound-sa-token\") pod \"cert-manager-545d4d4674-rg2rp\" (UID: \"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c\") " pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:43.172875 master-0 kubenswrapper[27819]: I0319 09:43:43.172813 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfccs\" (UniqueName: \"kubernetes.io/projected/bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c-kube-api-access-rfccs\") pod \"cert-manager-545d4d4674-rg2rp\" (UID: \"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c\") " pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:43.173121 master-0 kubenswrapper[27819]: I0319 09:43:43.172925 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c-bound-sa-token\") pod \"cert-manager-545d4d4674-rg2rp\" (UID: \"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c\") " pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:43.192739 master-0 kubenswrapper[27819]: I0319 09:43:43.192511 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfccs\" (UniqueName: \"kubernetes.io/projected/bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c-kube-api-access-rfccs\") pod \"cert-manager-545d4d4674-rg2rp\" (UID: \"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c\") " pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:43.196294 master-0 kubenswrapper[27819]: I0319 09:43:43.196234 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c-bound-sa-token\") pod \"cert-manager-545d4d4674-rg2rp\" (UID: \"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c\") " pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:43.253188 master-0 kubenswrapper[27819]: I0319 09:43:43.253153 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-rg2rp"
Mar 19 09:43:44.505308 master-0 kubenswrapper[27819]: I0319 09:43:44.504786 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"]
Mar 19 09:43:44.505983 master-0 kubenswrapper[27819]: I0319 09:43:44.505816 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"
Mar 19 09:43:44.512315 master-0 kubenswrapper[27819]: I0319 09:43:44.512275 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 19 09:43:44.512707 master-0 kubenswrapper[27819]: I0319 09:43:44.512692 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 19 09:43:44.526016 master-0 kubenswrapper[27819]: I0319 09:43:44.525975 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"]
Mar 19 09:43:44.560449 master-0 kubenswrapper[27819]: I0319 09:43:44.560410 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ght8l\" (UniqueName: \"kubernetes.io/projected/932b329a-ecfe-4302-a038-897ecb0cba70-kube-api-access-ght8l\") pod \"obo-prometheus-operator-8ff7d675-9bx48\" (UID: \"932b329a-ecfe-4302-a038-897ecb0cba70\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"
Mar 19 09:43:44.664811 master-0 kubenswrapper[27819]: I0319 09:43:44.664739 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ght8l\" (UniqueName: \"kubernetes.io/projected/932b329a-ecfe-4302-a038-897ecb0cba70-kube-api-access-ght8l\") pod \"obo-prometheus-operator-8ff7d675-9bx48\" (UID: \"932b329a-ecfe-4302-a038-897ecb0cba70\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"
Mar 19 09:43:44.686451 master-0 kubenswrapper[27819]: I0319 09:43:44.686402 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ght8l\" (UniqueName: \"kubernetes.io/projected/932b329a-ecfe-4302-a038-897ecb0cba70-kube-api-access-ght8l\") pod \"obo-prometheus-operator-8ff7d675-9bx48\" (UID: \"932b329a-ecfe-4302-a038-897ecb0cba70\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"
Mar 19 09:43:44.861180 master-0 kubenswrapper[27819]: I0319 09:43:44.861109 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"
Mar 19 09:43:44.888801 master-0 kubenswrapper[27819]: I0319 09:43:44.888681 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"]
Mar 19 09:43:44.893688 master-0 kubenswrapper[27819]: I0319 09:43:44.890489 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:44.894823 master-0 kubenswrapper[27819]: I0319 09:43:44.894533 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Mar 19 09:43:44.898624 master-0 kubenswrapper[27819]: I0319 09:43:44.896003 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"]
Mar 19 09:43:44.898624 master-0 kubenswrapper[27819]: I0319 09:43:44.896939 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:44.905206 master-0 kubenswrapper[27819]: I0319 09:43:44.904180 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"]
Mar 19 09:43:44.938443 master-0 kubenswrapper[27819]: I0319 09:43:44.937364 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"]
Mar 19 09:43:44.977572 master-0 kubenswrapper[27819]: I0319 09:43:44.975307 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bf657eaf-b269-4c0b-ba67-919617069075-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm\" (UID: \"bf657eaf-b269-4c0b-ba67-919617069075\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:44.977572 master-0 kubenswrapper[27819]: I0319 09:43:44.975493 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bf657eaf-b269-4c0b-ba67-919617069075-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm\" (UID: \"bf657eaf-b269-4c0b-ba67-919617069075\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:44.977572 master-0 kubenswrapper[27819]: I0319 09:43:44.975837 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e83579d5-c5c8-43e8-8e54-c0bf104c1be3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb\" (UID: \"e83579d5-c5c8-43e8-8e54-c0bf104c1be3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:44.977572 master-0 kubenswrapper[27819]: I0319 09:43:44.975951 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e83579d5-c5c8-43e8-8e54-c0bf104c1be3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb\" (UID: \"e83579d5-c5c8-43e8-8e54-c0bf104c1be3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:45.081644 master-0 kubenswrapper[27819]: I0319 09:43:45.079617 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bf657eaf-b269-4c0b-ba67-919617069075-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm\" (UID: \"bf657eaf-b269-4c0b-ba67-919617069075\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:45.081644 master-0 kubenswrapper[27819]: I0319 09:43:45.079770 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e83579d5-c5c8-43e8-8e54-c0bf104c1be3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb\" (UID: \"e83579d5-c5c8-43e8-8e54-c0bf104c1be3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:45.081644 master-0 kubenswrapper[27819]: I0319 09:43:45.079824 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e83579d5-c5c8-43e8-8e54-c0bf104c1be3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb\" (UID: \"e83579d5-c5c8-43e8-8e54-c0bf104c1be3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:45.081644 master-0 kubenswrapper[27819]: I0319 09:43:45.079885 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bf657eaf-b269-4c0b-ba67-919617069075-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm\" (UID: \"bf657eaf-b269-4c0b-ba67-919617069075\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:45.082940 master-0 kubenswrapper[27819]: I0319 09:43:45.082519 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bf657eaf-b269-4c0b-ba67-919617069075-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm\" (UID: \"bf657eaf-b269-4c0b-ba67-919617069075\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:45.087905 master-0 kubenswrapper[27819]: I0319 09:43:45.087870 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bf657eaf-b269-4c0b-ba67-919617069075-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm\" (UID: \"bf657eaf-b269-4c0b-ba67-919617069075\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:45.092081 master-0 kubenswrapper[27819]: I0319 09:43:45.091190 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e83579d5-c5c8-43e8-8e54-c0bf104c1be3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb\" (UID: \"e83579d5-c5c8-43e8-8e54-c0bf104c1be3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:45.092081 master-0 kubenswrapper[27819]: I0319 09:43:45.091619 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e83579d5-c5c8-43e8-8e54-c0bf104c1be3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb\" (UID: \"e83579d5-c5c8-43e8-8e54-c0bf104c1be3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:45.290649 master-0 kubenswrapper[27819]: I0319 09:43:45.282733 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"
Mar 19 09:43:45.301590 master-0 kubenswrapper[27819]: I0319 09:43:45.301058 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"
Mar 19 09:43:45.401569 master-0 kubenswrapper[27819]: I0319 09:43:45.398398 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-klw2c"]
Mar 19 09:43:45.401569 master-0 kubenswrapper[27819]: I0319 09:43:45.399385 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.409139 master-0 kubenswrapper[27819]: I0319 09:43:45.409061 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Mar 19 09:43:45.461557 master-0 kubenswrapper[27819]: I0319 09:43:45.461468 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-klw2c"]
Mar 19 09:43:45.489568 master-0 kubenswrapper[27819]: I0319 09:43:45.487388 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxgr\" (UniqueName: \"kubernetes.io/projected/f19bcebf-8072-4c12-a02a-749cfb639f0d-kube-api-access-mnxgr\") pod \"observability-operator-6dd7dd855f-klw2c\" (UID: \"f19bcebf-8072-4c12-a02a-749cfb639f0d\") " pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.489568 master-0 kubenswrapper[27819]: I0319 09:43:45.487454 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f19bcebf-8072-4c12-a02a-749cfb639f0d-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-klw2c\" (UID: \"f19bcebf-8072-4c12-a02a-749cfb639f0d\") " pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.589487 master-0 kubenswrapper[27819]: I0319 09:43:45.588488 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxgr\" (UniqueName: \"kubernetes.io/projected/f19bcebf-8072-4c12-a02a-749cfb639f0d-kube-api-access-mnxgr\") pod \"observability-operator-6dd7dd855f-klw2c\" (UID: \"f19bcebf-8072-4c12-a02a-749cfb639f0d\") " pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.589487 master-0 kubenswrapper[27819]: I0319 09:43:45.588561 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f19bcebf-8072-4c12-a02a-749cfb639f0d-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-klw2c\" (UID: \"f19bcebf-8072-4c12-a02a-749cfb639f0d\") " pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.593310 master-0 kubenswrapper[27819]: I0319 09:43:45.593264 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f19bcebf-8072-4c12-a02a-749cfb639f0d-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-klw2c\" (UID: \"f19bcebf-8072-4c12-a02a-749cfb639f0d\") " pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.623385 master-0 kubenswrapper[27819]: I0319 09:43:45.623331 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnxgr\" (UniqueName: \"kubernetes.io/projected/f19bcebf-8072-4c12-a02a-749cfb639f0d-kube-api-access-mnxgr\") pod \"observability-operator-6dd7dd855f-klw2c\" (UID: \"f19bcebf-8072-4c12-a02a-749cfb639f0d\") " pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.754446 master-0 kubenswrapper[27819]: I0319 09:43:45.754382 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-klw2c"
Mar 19 09:43:45.876051 master-0 kubenswrapper[27819]: I0319 09:43:45.875985 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-666976f6f5-jb96n"]
Mar 19 09:43:45.876972 master-0 kubenswrapper[27819]: I0319 09:43:45.876943 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:45.878873 master-0 kubenswrapper[27819]: I0319 09:43:45.878818 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert"
Mar 19 09:43:45.891327 master-0 kubenswrapper[27819]: I0319 09:43:45.891253 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-666976f6f5-jb96n"]
Mar 19 09:43:45.913617 master-0 kubenswrapper[27819]: I0319 09:43:45.901249 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b53de1a1-a173-49be-a93a-dd54949c0b27-apiservice-cert\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:45.913617 master-0 kubenswrapper[27819]: I0319 09:43:45.901381 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b53de1a1-a173-49be-a93a-dd54949c0b27-openshift-service-ca\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:45.913617 master-0 kubenswrapper[27819]: I0319 09:43:45.901436 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b53de1a1-a173-49be-a93a-dd54949c0b27-webhook-cert\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:45.913617 master-0 kubenswrapper[27819]: I0319 09:43:45.901462 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcsn\" (UniqueName: \"kubernetes.io/projected/b53de1a1-a173-49be-a93a-dd54949c0b27-kube-api-access-xjcsn\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.002932 master-0 kubenswrapper[27819]: I0319 09:43:46.002832 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b53de1a1-a173-49be-a93a-dd54949c0b27-openshift-service-ca\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.003142 master-0 kubenswrapper[27819]: I0319 09:43:46.002950 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b53de1a1-a173-49be-a93a-dd54949c0b27-webhook-cert\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.003142 master-0 kubenswrapper[27819]: I0319 09:43:46.002978 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcsn\" (UniqueName: \"kubernetes.io/projected/b53de1a1-a173-49be-a93a-dd54949c0b27-kube-api-access-xjcsn\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.003142 master-0 kubenswrapper[27819]: I0319 09:43:46.003036 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b53de1a1-a173-49be-a93a-dd54949c0b27-apiservice-cert\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.004792 master-0 kubenswrapper[27819]: I0319 09:43:46.004760 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b53de1a1-a173-49be-a93a-dd54949c0b27-openshift-service-ca\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.006582 master-0 kubenswrapper[27819]: I0319 09:43:46.006367 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b53de1a1-a173-49be-a93a-dd54949c0b27-apiservice-cert\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.007775 master-0 kubenswrapper[27819]: I0319 09:43:46.007752 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b53de1a1-a173-49be-a93a-dd54949c0b27-webhook-cert\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.026647 master-0 kubenswrapper[27819]: I0319 09:43:46.020098 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcsn\" (UniqueName: \"kubernetes.io/projected/b53de1a1-a173-49be-a93a-dd54949c0b27-kube-api-access-xjcsn\") pod \"perses-operator-666976f6f5-jb96n\" (UID: \"b53de1a1-a173-49be-a93a-dd54949c0b27\") " pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:46.206206 master-0 kubenswrapper[27819]: I0319 09:43:46.206071 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-666976f6f5-jb96n"
Mar 19 09:43:48.793454 master-0 kubenswrapper[27819]: I0319 09:43:48.790183 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-klw2c"]
Mar 19 09:43:48.812779 master-0 kubenswrapper[27819]: I0319 09:43:48.799381 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" event={"ID":"cc6d5c29-67f6-4416-a6d8-b4078e2d2bcb","Type":"ContainerStarted","Data":"b9ee04de96879ec941ff4df82e242fa2231dd83eccb3a2ff16688c58d8fb2f0d"}
Mar 19 09:43:48.812779 master-0 kubenswrapper[27819]: I0319 09:43:48.799755 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d"
Mar 19 09:43:48.812779 master-0 kubenswrapper[27819]: I0319 09:43:48.808283 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" event={"ID":"d6a0521c-c6d9-4422-988b-1e91369a664c","Type":"ContainerStarted","Data":"a351d2fad7f78c4d554cc999673918b4d192cc198bfac3ddb6224fcfadb486e9"}
Mar 19 09:43:48.812779 master-0 kubenswrapper[27819]: I0319 09:43:48.812479 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km" event={"ID":"9dbe2f7c-684e-4e47-b6c1-8cd65e42cf4e","Type":"ContainerStarted","Data":"b4466867fc51252117c87575b76a36a6492d6089c99a4ded049ee38cfae2ce84"}
Mar 19 09:43:48.819561 master-0 kubenswrapper[27819]: I0319 09:43:48.813298 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km"
Mar 19 09:43:48.819561 master-0 kubenswrapper[27819]: I0319 09:43:48.815236 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx" event={"ID":"4b713f9a-4bdf-4e79-894b-28a80f4bc8f6","Type":"ContainerStarted","Data":"2a1dee2009d96d81d2f3e8eabe605f978cd249feba3ea9972f50287271c09a4f"}
Mar 19 09:43:48.819561 master-0 kubenswrapper[27819]: I0319 09:43:48.815901 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx"
Mar 19 09:43:48.826291 master-0 kubenswrapper[27819]: I0319 09:43:48.823480 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-9bx48"]
Mar 19 09:43:48.836193 master-0 kubenswrapper[27819]: I0319 09:43:48.831601 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb"]
Mar 19 09:43:48.839867 master-0 kubenswrapper[27819]: I0319 09:43:48.838184 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm"]
Mar 19 09:43:48.865420 master-0 kubenswrapper[27819]: I0319 09:43:48.865350 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" podStartSLOduration=2.403389457 podStartE2EDuration="18.865327241s" podCreationTimestamp="2026-03-19 09:43:30 +0000 UTC" firstStartedPulling="2026-03-19 09:43:31.788968989 +0000 UTC m=+596.710546681" lastFinishedPulling="2026-03-19 09:43:48.250906773 +0000 UTC m=+613.172484465" observedRunningTime="2026-03-19 09:43:48.827352146 +0000 UTC m=+613.748929848" watchObservedRunningTime="2026-03-19 09:43:48.865327241 +0000 UTC m=+613.786904933"
Mar 19 09:43:48.866357 master-0 kubenswrapper[27819]: I0319 09:43:48.866311 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx" podStartSLOduration=2.021394923 podStartE2EDuration="13.866305218s" podCreationTimestamp="2026-03-19 09:43:35 +0000 UTC"
firstStartedPulling="2026-03-19 09:43:36.315868033 +0000 UTC m=+601.237445725" lastFinishedPulling="2026-03-19 09:43:48.160778328 +0000 UTC m=+613.082356020" observedRunningTime="2026-03-19 09:43:48.855090765 +0000 UTC m=+613.776668467" watchObservedRunningTime="2026-03-19 09:43:48.866305218 +0000 UTC m=+613.787882910" Mar 19 09:43:48.902386 master-0 kubenswrapper[27819]: I0319 09:43:48.902215 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km" podStartSLOduration=2.389811685 podStartE2EDuration="13.902196928s" podCreationTimestamp="2026-03-19 09:43:35 +0000 UTC" firstStartedPulling="2026-03-19 09:43:36.645705873 +0000 UTC m=+601.567283565" lastFinishedPulling="2026-03-19 09:43:48.158091116 +0000 UTC m=+613.079668808" observedRunningTime="2026-03-19 09:43:48.881271282 +0000 UTC m=+613.802848984" watchObservedRunningTime="2026-03-19 09:43:48.902196928 +0000 UTC m=+613.823774620" Mar 19 09:43:48.942360 master-0 kubenswrapper[27819]: I0319 09:43:48.936780 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-7xt6t" podStartSLOduration=2.610452533 podStartE2EDuration="16.936756092s" podCreationTimestamp="2026-03-19 09:43:32 +0000 UTC" firstStartedPulling="2026-03-19 09:43:33.823005299 +0000 UTC m=+598.744582991" lastFinishedPulling="2026-03-19 09:43:48.149308858 +0000 UTC m=+613.070886550" observedRunningTime="2026-03-19 09:43:48.904925082 +0000 UTC m=+613.826502784" watchObservedRunningTime="2026-03-19 09:43:48.936756092 +0000 UTC m=+613.858333784" Mar 19 09:43:49.044281 master-0 kubenswrapper[27819]: I0319 09:43:49.044239 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-rg2rp"] Mar 19 09:43:49.057598 master-0 kubenswrapper[27819]: I0319 09:43:49.057343 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-666976f6f5-jb96n"] Mar 19 09:43:49.822607 master-0 kubenswrapper[27819]: I0319 09:43:49.822530 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm" event={"ID":"bf657eaf-b269-4c0b-ba67-919617069075","Type":"ContainerStarted","Data":"2d1a7d74569d28a4c774686d70d8788d970fdf380c77c3753ba5a9a138154083"} Mar 19 09:43:49.823841 master-0 kubenswrapper[27819]: I0319 09:43:49.823785 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-rg2rp" event={"ID":"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c","Type":"ContainerStarted","Data":"704ab87b60ff05997375581980ca208e229afd232d9a615ab0a853f45807bafc"} Mar 19 09:43:49.823841 master-0 kubenswrapper[27819]: I0319 09:43:49.823837 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-rg2rp" event={"ID":"bee3ba6c-1c7d-40bd-be6f-3e8e56fe9e4c","Type":"ContainerStarted","Data":"87d04431ebe35e68205530ed79ae890735c01b4e77a74990bb5a5841ab3a9047"} Mar 19 09:43:49.825161 master-0 kubenswrapper[27819]: I0319 09:43:49.825120 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb" event={"ID":"e83579d5-c5c8-43e8-8e54-c0bf104c1be3","Type":"ContainerStarted","Data":"8a6a7528f8afc463b25c0595656b7536ab7c18a30de2c484dd7990e0f4398cef"} Mar 19 09:43:49.826347 master-0 kubenswrapper[27819]: I0319 09:43:49.826313 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48" event={"ID":"932b329a-ecfe-4302-a038-897ecb0cba70","Type":"ContainerStarted","Data":"8961b549f3eca4319f87fd60054bb426499a71f8635c1892dde11c1c45fdda74"} Mar 19 09:43:49.827640 master-0 kubenswrapper[27819]: I0319 09:43:49.827534 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-6dd7dd855f-klw2c" event={"ID":"f19bcebf-8072-4c12-a02a-749cfb639f0d","Type":"ContainerStarted","Data":"a8a6c1d7a6857e05e59438c1b4fa77c552ddd8eefa2a85811480ff2c5cb0f4cc"} Mar 19 09:43:49.829261 master-0 kubenswrapper[27819]: I0319 09:43:49.829226 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-666976f6f5-jb96n" event={"ID":"b53de1a1-a173-49be-a93a-dd54949c0b27","Type":"ContainerStarted","Data":"2bc05ad9eeb0e6b58b5917161f019d67a030fa186f9720a0977eb7db4fe3f65e"} Mar 19 09:43:49.846561 master-0 kubenswrapper[27819]: I0319 09:43:49.846462 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-rg2rp" podStartSLOduration=7.846445667 podStartE2EDuration="7.846445667s" podCreationTimestamp="2026-03-19 09:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:43:49.845193893 +0000 UTC m=+614.766771585" watchObservedRunningTime="2026-03-19 09:43:49.846445667 +0000 UTC m=+614.768023359" Mar 19 09:43:56.223863 master-0 kubenswrapper[27819]: I0319 09:43:56.221618 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-cfh4d" Mar 19 09:43:56.909885 master-0 kubenswrapper[27819]: I0319 09:43:56.909835 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb" event={"ID":"e83579d5-c5c8-43e8-8e54-c0bf104c1be3","Type":"ContainerStarted","Data":"574f374d9600c9cf9b210436d81f7b75eb2e5722693b163c0f3d5b2c69fcec88"} Mar 19 09:43:56.914059 master-0 kubenswrapper[27819]: I0319 09:43:56.914029 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48" 
event={"ID":"932b329a-ecfe-4302-a038-897ecb0cba70","Type":"ContainerStarted","Data":"4ae75ac8ce6f2b455570e7293bf8f7d4346c6bca678ee8234a8196570e354a9d"} Mar 19 09:43:56.931974 master-0 kubenswrapper[27819]: I0319 09:43:56.921248 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-666976f6f5-jb96n" event={"ID":"b53de1a1-a173-49be-a93a-dd54949c0b27","Type":"ContainerStarted","Data":"5775319b9976a18b99b6393b288d94e4e1d71794c8e1039c9e9f624524f9ea9e"} Mar 19 09:43:56.931974 master-0 kubenswrapper[27819]: I0319 09:43:56.921968 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-666976f6f5-jb96n" Mar 19 09:43:56.931974 master-0 kubenswrapper[27819]: I0319 09:43:56.924110 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm" event={"ID":"bf657eaf-b269-4c0b-ba67-919617069075","Type":"ContainerStarted","Data":"ea274c01d3117d1876fa936e1c4c62d461268e7e6175ab0a521c57bf0f569e8a"} Mar 19 09:43:56.989003 master-0 kubenswrapper[27819]: I0319 09:43:56.986252 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-xjtrb" podStartSLOduration=6.055087104 podStartE2EDuration="12.986233471s" podCreationTimestamp="2026-03-19 09:43:44 +0000 UTC" firstStartedPulling="2026-03-19 09:43:48.836566285 +0000 UTC m=+613.758143977" lastFinishedPulling="2026-03-19 09:43:55.767712652 +0000 UTC m=+620.689290344" observedRunningTime="2026-03-19 09:43:56.93656858 +0000 UTC m=+621.858146292" watchObservedRunningTime="2026-03-19 09:43:56.986233471 +0000 UTC m=+621.907811163" Mar 19 09:43:56.995671 master-0 kubenswrapper[27819]: I0319 09:43:56.995104 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f6f6b7fd6-9smjm" 
podStartSLOduration=6.054475447 podStartE2EDuration="12.995080441s" podCreationTimestamp="2026-03-19 09:43:44 +0000 UTC" firstStartedPulling="2026-03-19 09:43:48.861728704 +0000 UTC m=+613.783306396" lastFinishedPulling="2026-03-19 09:43:55.802333698 +0000 UTC m=+620.723911390" observedRunningTime="2026-03-19 09:43:56.978836682 +0000 UTC m=+621.900414394" watchObservedRunningTime="2026-03-19 09:43:56.995080441 +0000 UTC m=+621.916658133" Mar 19 09:43:57.036241 master-0 kubenswrapper[27819]: I0319 09:43:57.035999 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-666976f6f5-jb96n" podStartSLOduration=5.286822354 podStartE2EDuration="12.035975685s" podCreationTimestamp="2026-03-19 09:43:45 +0000 UTC" firstStartedPulling="2026-03-19 09:43:49.051881082 +0000 UTC m=+613.973458774" lastFinishedPulling="2026-03-19 09:43:55.801034413 +0000 UTC m=+620.722612105" observedRunningTime="2026-03-19 09:43:57.02543124 +0000 UTC m=+621.947008952" watchObservedRunningTime="2026-03-19 09:43:57.035975685 +0000 UTC m=+621.957553387" Mar 19 09:43:57.071426 master-0 kubenswrapper[27819]: I0319 09:43:57.071364 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-9bx48" podStartSLOduration=6.115124686 podStartE2EDuration="13.071347221s" podCreationTimestamp="2026-03-19 09:43:44 +0000 UTC" firstStartedPulling="2026-03-19 09:43:48.831346084 +0000 UTC m=+613.752923776" lastFinishedPulling="2026-03-19 09:43:55.787568619 +0000 UTC m=+620.709146311" observedRunningTime="2026-03-19 09:43:57.069611764 +0000 UTC m=+621.991189466" watchObservedRunningTime="2026-03-19 09:43:57.071347221 +0000 UTC m=+621.992924913" Mar 19 09:44:00.956503 master-0 kubenswrapper[27819]: I0319 09:44:00.956442 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-klw2c" 
event={"ID":"f19bcebf-8072-4c12-a02a-749cfb639f0d","Type":"ContainerStarted","Data":"c935d25ad8502991859d804d5f97eb11280dd459168167ee8d00719af3af9e43"} Mar 19 09:44:00.957113 master-0 kubenswrapper[27819]: I0319 09:44:00.956775 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-klw2c" Mar 19 09:44:01.057838 master-0 kubenswrapper[27819]: I0319 09:44:01.057781 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-klw2c" Mar 19 09:44:01.362719 master-0 kubenswrapper[27819]: I0319 09:44:01.362635 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-klw2c" podStartSLOduration=4.91316824 podStartE2EDuration="16.362613902s" podCreationTimestamp="2026-03-19 09:43:45 +0000 UTC" firstStartedPulling="2026-03-19 09:43:48.799264837 +0000 UTC m=+613.720842529" lastFinishedPulling="2026-03-19 09:44:00.248710499 +0000 UTC m=+625.170288191" observedRunningTime="2026-03-19 09:44:01.357603226 +0000 UTC m=+626.279180928" watchObservedRunningTime="2026-03-19 09:44:01.362613902 +0000 UTC m=+626.284191604" Mar 19 09:44:06.088402 master-0 kubenswrapper[27819]: I0319 09:44:06.088332 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79f4998fb6-8f7km" Mar 19 09:44:06.208771 master-0 kubenswrapper[27819]: I0319 09:44:06.208712 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-666976f6f5-jb96n" Mar 19 09:44:25.791121 master-0 kubenswrapper[27819]: I0319 09:44:25.791050 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b6cc4d969-58kxx" Mar 19 09:44:33.743622 master-0 kubenswrapper[27819]: I0319 09:44:33.743353 27819 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq"] Mar 19 09:44:33.750559 master-0 kubenswrapper[27819]: I0319 09:44:33.749040 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" Mar 19 09:44:33.757556 master-0 kubenswrapper[27819]: I0319 09:44:33.755643 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-j4z46"] Mar 19 09:44:33.761558 master-0 kubenswrapper[27819]: I0319 09:44:33.759131 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.761558 master-0 kubenswrapper[27819]: I0319 09:44:33.760324 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq"] Mar 19 09:44:33.793560 master-0 kubenswrapper[27819]: I0319 09:44:33.792960 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 19 09:44:33.793560 master-0 kubenswrapper[27819]: I0319 09:44:33.793169 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 19 09:44:33.793560 master-0 kubenswrapper[27819]: I0319 09:44:33.793298 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 19 09:44:33.893225 master-0 kubenswrapper[27819]: I0319 09:44:33.893020 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-5frqz"] Mar 19 09:44:33.894759 master-0 kubenswrapper[27819]: I0319 09:44:33.894730 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-5frqz" Mar 19 09:44:33.897891 master-0 kubenswrapper[27819]: I0319 09:44:33.897864 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 19 09:44:33.898190 master-0 kubenswrapper[27819]: I0319 09:44:33.898175 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 19 09:44:33.898456 master-0 kubenswrapper[27819]: I0319 09:44:33.898443 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 19 09:44:33.899611 master-0 kubenswrapper[27819]: I0319 09:44:33.899592 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-reloader\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.899729 master-0 kubenswrapper[27819]: I0319 09:44:33.899713 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-sockets\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.899978 master-0 kubenswrapper[27819]: I0319 09:44:33.899961 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b48jd\" (UniqueName: \"kubernetes.io/projected/300b82db-40bd-4206-bb4e-11db3ac40342-kube-api-access-b48jd\") pod \"frr-k8s-webhook-server-bcc4b6f68-977gq\" (UID: \"300b82db-40bd-4206-bb4e-11db3ac40342\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" Mar 19 09:44:33.900084 master-0 kubenswrapper[27819]: I0319 09:44:33.900072 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-conf\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.900157 master-0 kubenswrapper[27819]: I0319 09:44:33.900145 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-metrics\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.900238 master-0 kubenswrapper[27819]: I0319 09:44:33.900226 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d960bf6-0166-4e00-8ba8-ccc61b696d46-metrics-certs\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.900328 master-0 kubenswrapper[27819]: I0319 09:44:33.900315 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/300b82db-40bd-4206-bb4e-11db3ac40342-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-977gq\" (UID: \"300b82db-40bd-4206-bb4e-11db3ac40342\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" Mar 19 09:44:33.900463 master-0 kubenswrapper[27819]: I0319 09:44:33.900416 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-startup\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.900556 master-0 kubenswrapper[27819]: I0319 09:44:33.900530 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzwmc\" (UniqueName: \"kubernetes.io/projected/8d960bf6-0166-4e00-8ba8-ccc61b696d46-kube-api-access-xzwmc\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:33.913495 master-0 kubenswrapper[27819]: I0319 09:44:33.913404 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-mpwcm"] Mar 19 09:44:33.915097 master-0 kubenswrapper[27819]: I0319 09:44:33.915062 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-mpwcm" Mar 19 09:44:33.916983 master-0 kubenswrapper[27819]: I0319 09:44:33.916943 27819 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 19 09:44:33.931694 master-0 kubenswrapper[27819]: I0319 09:44:33.931615 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-mpwcm"] Mar 19 09:44:34.001726 master-0 kubenswrapper[27819]: I0319 09:44:34.001586 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b48jd\" (UniqueName: \"kubernetes.io/projected/300b82db-40bd-4206-bb4e-11db3ac40342-kube-api-access-b48jd\") pod \"frr-k8s-webhook-server-bcc4b6f68-977gq\" (UID: \"300b82db-40bd-4206-bb4e-11db3ac40342\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" Mar 19 09:44:34.001726 master-0 kubenswrapper[27819]: I0319 09:44:34.001682 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f2f18a5-b624-43ce-86a9-001388cb1f54-cert\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm" Mar 19 09:44:34.001726 master-0 kubenswrapper[27819]: I0319 09:44:34.001706 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-conf\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.001726 master-0 kubenswrapper[27819]: I0319 09:44:34.001722 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-metrics\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001748 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d960bf6-0166-4e00-8ba8-ccc61b696d46-metrics-certs\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001771 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-metrics-certs\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001799 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/300b82db-40bd-4206-bb4e-11db3ac40342-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-977gq\" (UID: \"300b82db-40bd-4206-bb4e-11db3ac40342\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001836 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f2f18a5-b624-43ce-86a9-001388cb1f54-metrics-certs\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001856 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qst\" (UniqueName: \"kubernetes.io/projected/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-kube-api-access-84qst\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001872 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-startup\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001900 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001918 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-metallb-excludel2\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001935 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzwmc\" 
(UniqueName: \"kubernetes.io/projected/8d960bf6-0166-4e00-8ba8-ccc61b696d46-kube-api-access-xzwmc\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001963 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5md\" (UniqueName: \"kubernetes.io/projected/6f2f18a5-b624-43ce-86a9-001388cb1f54-kube-api-access-7c5md\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.001988 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-reloader\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.002022 master-0 kubenswrapper[27819]: I0319 09:44:34.002011 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-sockets\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.002369 master-0 kubenswrapper[27819]: I0319 09:44:34.002328 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-conf\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:34.002819 master-0 kubenswrapper[27819]: I0319 09:44:34.002682 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-metrics\") pod 
\"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46"
Mar 19 09:44:34.002819 master-0 kubenswrapper[27819]: I0319 09:44:34.002807 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-reloader\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46"
Mar 19 09:44:34.003000 master-0 kubenswrapper[27819]: I0319 09:44:34.002979 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-sockets\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46"
Mar 19 09:44:34.003490 master-0 kubenswrapper[27819]: I0319 09:44:34.003465 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8d960bf6-0166-4e00-8ba8-ccc61b696d46-frr-startup\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46"
Mar 19 09:44:34.005973 master-0 kubenswrapper[27819]: I0319 09:44:34.005938 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/300b82db-40bd-4206-bb4e-11db3ac40342-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-977gq\" (UID: \"300b82db-40bd-4206-bb4e-11db3ac40342\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq"
Mar 19 09:44:34.006261 master-0 kubenswrapper[27819]: I0319 09:44:34.006237 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d960bf6-0166-4e00-8ba8-ccc61b696d46-metrics-certs\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46"
Mar 19 09:44:34.017496 master-0 kubenswrapper[27819]: I0319 09:44:34.017442 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b48jd\" (UniqueName: \"kubernetes.io/projected/300b82db-40bd-4206-bb4e-11db3ac40342-kube-api-access-b48jd\") pod \"frr-k8s-webhook-server-bcc4b6f68-977gq\" (UID: \"300b82db-40bd-4206-bb4e-11db3ac40342\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq"
Mar 19 09:44:34.021087 master-0 kubenswrapper[27819]: I0319 09:44:34.021070 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzwmc\" (UniqueName: \"kubernetes.io/projected/8d960bf6-0166-4e00-8ba8-ccc61b696d46-kube-api-access-xzwmc\") pod \"frr-k8s-j4z46\" (UID: \"8d960bf6-0166-4e00-8ba8-ccc61b696d46\") " pod="metallb-system/frr-k8s-j4z46"
Mar 19 09:44:34.104585 master-0 kubenswrapper[27819]: I0319 09:44:34.104498 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f2f18a5-b624-43ce-86a9-001388cb1f54-cert\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm"
Mar 19 09:44:34.104831 master-0 kubenswrapper[27819]: I0319 09:44:34.104615 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-metrics-certs\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.104831 master-0 kubenswrapper[27819]: I0319 09:44:34.104696 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f2f18a5-b624-43ce-86a9-001388cb1f54-metrics-certs\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm"
Mar 19 09:44:34.104831 master-0 kubenswrapper[27819]: I0319 09:44:34.104724 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qst\" (UniqueName: \"kubernetes.io/projected/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-kube-api-access-84qst\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.104831 master-0 kubenswrapper[27819]: I0319 09:44:34.104747 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.104831 master-0 kubenswrapper[27819]: I0319 09:44:34.104765 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-metallb-excludel2\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.104831 master-0 kubenswrapper[27819]: I0319 09:44:34.104805 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c5md\" (UniqueName: \"kubernetes.io/projected/6f2f18a5-b624-43ce-86a9-001388cb1f54-kube-api-access-7c5md\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm"
Mar 19 09:44:34.105815 master-0 kubenswrapper[27819]: E0319 09:44:34.105430 27819 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 19 09:44:34.105815 master-0 kubenswrapper[27819]: E0319 09:44:34.105603 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist podName:9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:44:34.605572314 +0000 UTC m=+659.527150006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist") pod "speaker-5frqz" (UID: "9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7") : secret "metallb-memberlist" not found
Mar 19 09:44:34.105815 master-0 kubenswrapper[27819]: I0319 09:44:34.105757 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-metallb-excludel2\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.109249 master-0 kubenswrapper[27819]: I0319 09:44:34.109213 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-metrics-certs\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.109404 master-0 kubenswrapper[27819]: I0319 09:44:34.109379 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6f2f18a5-b624-43ce-86a9-001388cb1f54-cert\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm"
Mar 19 09:44:34.110188 master-0 kubenswrapper[27819]: I0319 09:44:34.109796 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6f2f18a5-b624-43ce-86a9-001388cb1f54-metrics-certs\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm"
Mar 19 09:44:34.122918 master-0 kubenswrapper[27819]: I0319 09:44:34.122742 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qst\" (UniqueName: \"kubernetes.io/projected/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-kube-api-access-84qst\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.123101 master-0 kubenswrapper[27819]: I0319 09:44:34.122957 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c5md\" (UniqueName: \"kubernetes.io/projected/6f2f18a5-b624-43ce-86a9-001388cb1f54-kube-api-access-7c5md\") pod \"controller-7bb4cc7c98-mpwcm\" (UID: \"6f2f18a5-b624-43ce-86a9-001388cb1f54\") " pod="metallb-system/controller-7bb4cc7c98-mpwcm"
Mar 19 09:44:34.132033 master-0 kubenswrapper[27819]: I0319 09:44:34.131983 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq"
Mar 19 09:44:34.145555 master-0 kubenswrapper[27819]: I0319 09:44:34.145422 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j4z46"
Mar 19 09:44:34.234972 master-0 kubenswrapper[27819]: I0319 09:44:34.234921 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-mpwcm"
Mar 19 09:44:34.570321 master-0 kubenswrapper[27819]: I0319 09:44:34.570001 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq"]
Mar 19 09:44:34.613080 master-0 kubenswrapper[27819]: I0319 09:44:34.613008 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:34.613218 master-0 kubenswrapper[27819]: E0319 09:44:34.613192 27819 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Mar 19 09:44:34.613272 master-0 kubenswrapper[27819]: E0319 09:44:34.613256 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist podName:9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7 nodeName:}" failed. No retries permitted until 2026-03-19 09:44:35.613235709 +0000 UTC m=+660.534813401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist") pod "speaker-5frqz" (UID: "9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7") : secret "metallb-memberlist" not found
Mar 19 09:44:34.660554 master-0 kubenswrapper[27819]: I0319 09:44:34.660470 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-mpwcm"]
Mar 19 09:44:34.668996 master-0 kubenswrapper[27819]: W0319 09:44:34.668774 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f2f18a5_b624_43ce_86a9_001388cb1f54.slice/crio-d2a15f6a34297c3b2474b1d6e9efa6b69e3a5ff20ff2ad0b62d8cf5a59fc5b00 WatchSource:0}: Error finding container d2a15f6a34297c3b2474b1d6e9efa6b69e3a5ff20ff2ad0b62d8cf5a59fc5b00: Status 404 returned error can't find the container with id d2a15f6a34297c3b2474b1d6e9efa6b69e3a5ff20ff2ad0b62d8cf5a59fc5b00
Mar 19 09:44:35.238614 master-0 kubenswrapper[27819]: I0319 09:44:35.238525 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-mpwcm" event={"ID":"6f2f18a5-b624-43ce-86a9-001388cb1f54","Type":"ContainerStarted","Data":"b8347a630f86b07149067f0fc57c7249491aefe4623b97c5c9138ca2b1ad9a33"}
Mar 19 09:44:35.238614 master-0 kubenswrapper[27819]: I0319 09:44:35.238602 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-mpwcm" event={"ID":"6f2f18a5-b624-43ce-86a9-001388cb1f54","Type":"ContainerStarted","Data":"d2a15f6a34297c3b2474b1d6e9efa6b69e3a5ff20ff2ad0b62d8cf5a59fc5b00"}
Mar 19 09:44:35.239768 master-0 kubenswrapper[27819]: I0319 09:44:35.239708 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerStarted","Data":"c401851116b7c1d46afe3222e59bb57759fa43f1ee99e25bfdc389b9abc4603f"}
Mar 19 09:44:35.240681 master-0 kubenswrapper[27819]: I0319 09:44:35.240645 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" event={"ID":"300b82db-40bd-4206-bb4e-11db3ac40342","Type":"ContainerStarted","Data":"86e30565bc175feb0389e7fd3a666a87bcd3727b1a6bec912f1c9aebaf5d5422"}
Mar 19 09:44:35.627859 master-0 kubenswrapper[27819]: I0319 09:44:35.627815 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:35.631193 master-0 kubenswrapper[27819]: I0319 09:44:35.631162 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7-memberlist\") pod \"speaker-5frqz\" (UID: \"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7\") " pod="metallb-system/speaker-5frqz"
Mar 19 09:44:35.715099 master-0 kubenswrapper[27819]: I0319 09:44:35.715032 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-5frqz"
Mar 19 09:44:35.741579 master-0 kubenswrapper[27819]: W0319 09:44:35.741440 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b1d99a2_4df5_447a_bcb7_8ebf2684e1d7.slice/crio-c602d5f6217f5e25e9b76f17b61c5762a823a2af4da5abc31f9ebc004c0be81b WatchSource:0}: Error finding container c602d5f6217f5e25e9b76f17b61c5762a823a2af4da5abc31f9ebc004c0be81b: Status 404 returned error can't find the container with id c602d5f6217f5e25e9b76f17b61c5762a823a2af4da5abc31f9ebc004c0be81b
Mar 19 09:44:36.013614 master-0 kubenswrapper[27819]: I0319 09:44:36.013210 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"]
Mar 19 09:44:36.015479 master-0 kubenswrapper[27819]: I0319 09:44:36.015437 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"
Mar 19 09:44:36.034427 master-0 kubenswrapper[27819]: I0319 09:44:36.033989 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"]
Mar 19 09:44:36.036893 master-0 kubenswrapper[27819]: I0319 09:44:36.035017 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.046555 master-0 kubenswrapper[27819]: I0319 09:44:36.038156 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Mar 19 09:44:36.046555 master-0 kubenswrapper[27819]: I0319 09:44:36.042959 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"]
Mar 19 09:44:36.050610 master-0 kubenswrapper[27819]: I0319 09:44:36.050504 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-46v4t"]
Mar 19 09:44:36.051602 master-0 kubenswrapper[27819]: I0319 09:44:36.051578 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.108097 master-0 kubenswrapper[27819]: I0319 09:44:36.107849 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"]
Mar 19 09:44:36.139222 master-0 kubenswrapper[27819]: I0319 09:44:36.137787 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/027697b5-ed73-483f-b112-4a3500b7d6e4-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jf6ms\" (UID: \"027697b5-ed73-483f-b112-4a3500b7d6e4\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.139222 master-0 kubenswrapper[27819]: I0319 09:44:36.137869 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-ovs-socket\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.139222 master-0 kubenswrapper[27819]: I0319 09:44:36.137974 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrtb\" (UniqueName: \"kubernetes.io/projected/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-kube-api-access-bhrtb\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.139222 master-0 kubenswrapper[27819]: I0319 09:44:36.138039 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-dbus-socket\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.144569 master-0 kubenswrapper[27819]: I0319 09:44:36.144189 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-nmstate-lock\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.144569 master-0 kubenswrapper[27819]: I0319 09:44:36.144403 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfjf5\" (UniqueName: \"kubernetes.io/projected/027697b5-ed73-483f-b112-4a3500b7d6e4-kube-api-access-rfjf5\") pod \"nmstate-webhook-5f558f5558-jf6ms\" (UID: \"027697b5-ed73-483f-b112-4a3500b7d6e4\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.144569 master-0 kubenswrapper[27819]: I0319 09:44:36.144437 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tfw\" (UniqueName: \"kubernetes.io/projected/e2a31567-4e56-48ea-97e9-16abfd79c17c-kube-api-access-w5tfw\") pod \"nmstate-metrics-9b8c8685d-7lpb8\" (UID: \"e2a31567-4e56-48ea-97e9-16abfd79c17c\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"
Mar 19 09:44:36.215165 master-0 kubenswrapper[27819]: I0319 09:44:36.214198 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"]
Mar 19 09:44:36.216926 master-0 kubenswrapper[27819]: I0319 09:44:36.216891 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.223553 master-0 kubenswrapper[27819]: I0319 09:44:36.221877 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 19 09:44:36.223553 master-0 kubenswrapper[27819]: I0319 09:44:36.222082 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 19 09:44:36.244065 master-0 kubenswrapper[27819]: I0319 09:44:36.233151 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"]
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.245692 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-nmstate-lock\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.245750 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfjf5\" (UniqueName: \"kubernetes.io/projected/027697b5-ed73-483f-b112-4a3500b7d6e4-kube-api-access-rfjf5\") pod \"nmstate-webhook-5f558f5558-jf6ms\" (UID: \"027697b5-ed73-483f-b112-4a3500b7d6e4\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.245802 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-nmstate-lock\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.245783 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5tfw\" (UniqueName: \"kubernetes.io/projected/e2a31567-4e56-48ea-97e9-16abfd79c17c-kube-api-access-w5tfw\") pod \"nmstate-metrics-9b8c8685d-7lpb8\" (UID: \"e2a31567-4e56-48ea-97e9-16abfd79c17c\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.245903 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/027697b5-ed73-483f-b112-4a3500b7d6e4-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jf6ms\" (UID: \"027697b5-ed73-483f-b112-4a3500b7d6e4\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.245955 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-ovs-socket\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.245986 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrtb\" (UniqueName: \"kubernetes.io/projected/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-kube-api-access-bhrtb\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.246015 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-dbus-socket\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.246144 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-dbus-socket\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.247579 master-0 kubenswrapper[27819]: I0319 09:44:36.246196 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-ovs-socket\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.273651 master-0 kubenswrapper[27819]: I0319 09:44:36.249226 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/027697b5-ed73-483f-b112-4a3500b7d6e4-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jf6ms\" (UID: \"027697b5-ed73-483f-b112-4a3500b7d6e4\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.273651 master-0 kubenswrapper[27819]: I0319 09:44:36.266000 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5tfw\" (UniqueName: \"kubernetes.io/projected/e2a31567-4e56-48ea-97e9-16abfd79c17c-kube-api-access-w5tfw\") pod \"nmstate-metrics-9b8c8685d-7lpb8\" (UID: \"e2a31567-4e56-48ea-97e9-16abfd79c17c\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"
Mar 19 09:44:36.273651 master-0 kubenswrapper[27819]: I0319 09:44:36.266033 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfjf5\" (UniqueName: \"kubernetes.io/projected/027697b5-ed73-483f-b112-4a3500b7d6e4-kube-api-access-rfjf5\") pod \"nmstate-webhook-5f558f5558-jf6ms\" (UID: \"027697b5-ed73-483f-b112-4a3500b7d6e4\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.273651 master-0 kubenswrapper[27819]: I0319 09:44:36.266199 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5frqz" event={"ID":"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7","Type":"ContainerStarted","Data":"33378df241dba3679013dbc7723799287aab02842ecad9bb4208c22eacd6127a"}
Mar 19 09:44:36.273651 master-0 kubenswrapper[27819]: I0319 09:44:36.266235 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5frqz" event={"ID":"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7","Type":"ContainerStarted","Data":"c602d5f6217f5e25e9b76f17b61c5762a823a2af4da5abc31f9ebc004c0be81b"}
Mar 19 09:44:36.273651 master-0 kubenswrapper[27819]: I0319 09:44:36.267088 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrtb\" (UniqueName: \"kubernetes.io/projected/d6f86547-cdd9-4d2b-b761-d1cb3bbb984f-kube-api-access-bhrtb\") pod \"nmstate-handler-46v4t\" (UID: \"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f\") " pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.349177 master-0 kubenswrapper[27819]: I0319 09:44:36.348421 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlsv\" (UniqueName: \"kubernetes.io/projected/11c4fb88-0ede-4120-af8b-c2f2422bd326-kube-api-access-bwlsv\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.349177 master-0 kubenswrapper[27819]: I0319 09:44:36.348473 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11c4fb88-0ede-4120-af8b-c2f2422bd326-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.349177 master-0 kubenswrapper[27819]: I0319 09:44:36.348960 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11c4fb88-0ede-4120-af8b-c2f2422bd326-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.425771 master-0 kubenswrapper[27819]: I0319 09:44:36.425726 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"
Mar 19 09:44:36.441765 master-0 kubenswrapper[27819]: I0319 09:44:36.441645 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"
Mar 19 09:44:36.454887 master-0 kubenswrapper[27819]: I0319 09:44:36.450149 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11c4fb88-0ede-4120-af8b-c2f2422bd326-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.454887 master-0 kubenswrapper[27819]: I0319 09:44:36.450198 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwlsv\" (UniqueName: \"kubernetes.io/projected/11c4fb88-0ede-4120-af8b-c2f2422bd326-kube-api-access-bwlsv\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.454887 master-0 kubenswrapper[27819]: I0319 09:44:36.450214 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-795f8f9784-4k4wd"]
Mar 19 09:44:36.454887 master-0 kubenswrapper[27819]: I0319 09:44:36.450398 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11c4fb88-0ede-4120-af8b-c2f2422bd326-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.454887 master-0 kubenswrapper[27819]: I0319 09:44:36.451465 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.454887 master-0 kubenswrapper[27819]: I0319 09:44:36.453004 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-46v4t"
Mar 19 09:44:36.454887 master-0 kubenswrapper[27819]: I0319 09:44:36.453631 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/11c4fb88-0ede-4120-af8b-c2f2422bd326-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.456418 master-0 kubenswrapper[27819]: I0319 09:44:36.456328 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/11c4fb88-0ede-4120-af8b-c2f2422bd326-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.482519 master-0 kubenswrapper[27819]: I0319 09:44:36.482368 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwlsv\" (UniqueName: \"kubernetes.io/projected/11c4fb88-0ede-4120-af8b-c2f2422bd326-kube-api-access-bwlsv\") pod \"nmstate-console-plugin-86f58fcf4-cttv9\" (UID: \"11c4fb88-0ede-4120-af8b-c2f2422bd326\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.486889 master-0 kubenswrapper[27819]: I0319 09:44:36.486839 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-795f8f9784-4k4wd"]
Mar 19 09:44:36.542986 master-0 kubenswrapper[27819]: I0319 09:44:36.542921 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"
Mar 19 09:44:36.552489 master-0 kubenswrapper[27819]: I0319 09:44:36.552422 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-oauth-serving-cert\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.552722 master-0 kubenswrapper[27819]: I0319 09:44:36.552530 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-service-ca\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.552722 master-0 kubenswrapper[27819]: I0319 09:44:36.552611 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-console-config\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.552722 master-0 kubenswrapper[27819]: I0319 09:44:36.552660 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e502fdfc-46f9-4129-afc5-df7e04efe586-console-oauth-config\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.552722 master-0 kubenswrapper[27819]: I0319 09:44:36.552688 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e502fdfc-46f9-4129-afc5-df7e04efe586-console-serving-cert\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.552722 master-0 kubenswrapper[27819]: I0319 09:44:36.552714 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k242l\" (UniqueName: \"kubernetes.io/projected/e502fdfc-46f9-4129-afc5-df7e04efe586-kube-api-access-k242l\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.552967 master-0 kubenswrapper[27819]: I0319 09:44:36.552736 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-trusted-ca-bundle\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.654988 master-0 kubenswrapper[27819]: I0319 09:44:36.654922 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-console-config\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.655167 master-0 kubenswrapper[27819]: I0319 09:44:36.655100 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e502fdfc-46f9-4129-afc5-df7e04efe586-console-oauth-config\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.655167 master-0 kubenswrapper[27819]: I0319 09:44:36.655154 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e502fdfc-46f9-4129-afc5-df7e04efe586-console-serving-cert\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.656995 master-0 kubenswrapper[27819]: W0319 09:44:36.656919 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6f86547_cdd9_4d2b_b761_d1cb3bbb984f.slice/crio-03d8fca22ba66bdd0c809a47aa9c5009ef0eb6aa3ee75ffe8ddd8efd50dfdcf0 WatchSource:0}: Error finding container 03d8fca22ba66bdd0c809a47aa9c5009ef0eb6aa3ee75ffe8ddd8efd50dfdcf0: Status 404 returned error can't find the container with id 03d8fca22ba66bdd0c809a47aa9c5009ef0eb6aa3ee75ffe8ddd8efd50dfdcf0
Mar 19 09:44:36.657156 master-0 kubenswrapper[27819]: I0319 09:44:36.657032 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k242l\" (UniqueName: \"kubernetes.io/projected/e502fdfc-46f9-4129-afc5-df7e04efe586-kube-api-access-k242l\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.657156 master-0 kubenswrapper[27819]: I0319 09:44:36.657072 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-trusted-ca-bundle\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.657254 master-0 kubenswrapper[27819]: I0319 09:44:36.657193 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-oauth-serving-cert\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.657351 master-0 kubenswrapper[27819]: I0319 09:44:36.657311 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-service-ca\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.658867 master-0 kubenswrapper[27819]: I0319 09:44:36.658840 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e502fdfc-46f9-4129-afc5-df7e04efe586-console-serving-cert\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.659726 master-0 kubenswrapper[27819]: I0319 09:44:36.659252 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-console-config\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.659726 master-0 kubenswrapper[27819]: I0319 09:44:36.659659 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-oauth-serving-cert\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.660008 master-0 kubenswrapper[27819]: I0319 09:44:36.659775 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e502fdfc-46f9-4129-afc5-df7e04efe586-console-oauth-config\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.660008 master-0 kubenswrapper[27819]: I0319 09:44:36.659987 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-service-ca\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.661216 master-0 kubenswrapper[27819]: I0319 09:44:36.661177 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e502fdfc-46f9-4129-afc5-df7e04efe586-trusted-ca-bundle\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.677676 master-0 kubenswrapper[27819]: I0319 09:44:36.675789 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k242l\" (UniqueName: \"kubernetes.io/projected/e502fdfc-46f9-4129-afc5-df7e04efe586-kube-api-access-k242l\") pod \"console-795f8f9784-4k4wd\" (UID: \"e502fdfc-46f9-4129-afc5-df7e04efe586\") " pod="openshift-console/console-795f8f9784-4k4wd"
Mar 19 09:44:36.828330 master-0 kubenswrapper[27819]: I0319 09:44:36.828220 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-795f8f9784-4k4wd" Mar 19 09:44:37.157081 master-0 kubenswrapper[27819]: I0319 09:44:37.157021 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8"] Mar 19 09:44:37.181501 master-0 kubenswrapper[27819]: W0319 09:44:37.181418 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod027697b5_ed73_483f_b112_4a3500b7d6e4.slice/crio-8d38c7ade7782dcb075bb6b76ec4455e66a6285c1a3ef12e0714c5a01e81e9d5 WatchSource:0}: Error finding container 8d38c7ade7782dcb075bb6b76ec4455e66a6285c1a3ef12e0714c5a01e81e9d5: Status 404 returned error can't find the container with id 8d38c7ade7782dcb075bb6b76ec4455e66a6285c1a3ef12e0714c5a01e81e9d5 Mar 19 09:44:37.195103 master-0 kubenswrapper[27819]: I0319 09:44:37.195035 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms"] Mar 19 09:44:37.254939 master-0 kubenswrapper[27819]: I0319 09:44:37.254680 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9"] Mar 19 09:44:37.293863 master-0 kubenswrapper[27819]: I0319 09:44:37.293299 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-46v4t" event={"ID":"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f","Type":"ContainerStarted","Data":"03d8fca22ba66bdd0c809a47aa9c5009ef0eb6aa3ee75ffe8ddd8efd50dfdcf0"} Mar 19 09:44:37.293863 master-0 kubenswrapper[27819]: I0319 09:44:37.293343 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9" event={"ID":"11c4fb88-0ede-4120-af8b-c2f2422bd326","Type":"ContainerStarted","Data":"f8339972f60d5048512f67c39b8e00249fbaad89ed8954e52413ae6ddafac62e"} Mar 19 09:44:37.293863 master-0 kubenswrapper[27819]: I0319 09:44:37.293358 27819 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8" event={"ID":"e2a31567-4e56-48ea-97e9-16abfd79c17c","Type":"ContainerStarted","Data":"9e36f29dbd9e3668394f5d8bce1b7121a025ce8b63e6538d61ace944261a1ee4"} Mar 19 09:44:37.293863 master-0 kubenswrapper[27819]: I0319 09:44:37.293371 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-mpwcm" event={"ID":"6f2f18a5-b624-43ce-86a9-001388cb1f54","Type":"ContainerStarted","Data":"bcaafaa726a38c4bcadc6482db50a326b6a012ca9b61b1e7fde60eec51971a0d"} Mar 19 09:44:37.293863 master-0 kubenswrapper[27819]: I0319 09:44:37.293388 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-mpwcm" Mar 19 09:44:37.298582 master-0 kubenswrapper[27819]: I0319 09:44:37.298472 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-5frqz" event={"ID":"9b1d99a2-4df5-447a-bcb7-8ebf2684e1d7","Type":"ContainerStarted","Data":"9c7285b5cfb2406ce823572d1353b688c92a93e88fc2a6eb11e26b47dded2419"} Mar 19 09:44:37.299305 master-0 kubenswrapper[27819]: I0319 09:44:37.299242 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-5frqz" Mar 19 09:44:37.304678 master-0 kubenswrapper[27819]: I0319 09:44:37.304323 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms" event={"ID":"027697b5-ed73-483f-b112-4a3500b7d6e4","Type":"ContainerStarted","Data":"8d38c7ade7782dcb075bb6b76ec4455e66a6285c1a3ef12e0714c5a01e81e9d5"} Mar 19 09:44:37.320421 master-0 kubenswrapper[27819]: I0319 09:44:37.320290 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-mpwcm" podStartSLOduration=2.9518808119999997 podStartE2EDuration="4.32026934s" podCreationTimestamp="2026-03-19 09:44:33 +0000 UTC" firstStartedPulling="2026-03-19 09:44:34.795598956 +0000 UTC m=+659.717176648" 
lastFinishedPulling="2026-03-19 09:44:36.163987484 +0000 UTC m=+661.085565176" observedRunningTime="2026-03-19 09:44:37.316742995 +0000 UTC m=+662.238320697" watchObservedRunningTime="2026-03-19 09:44:37.32026934 +0000 UTC m=+662.241847032" Mar 19 09:44:37.332839 master-0 kubenswrapper[27819]: W0319 09:44:37.332797 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode502fdfc_46f9_4129_afc5_df7e04efe586.slice/crio-6075f26020b094d40b469e531da4fd4e25098cb025ffd05cf55ee38840c971fc WatchSource:0}: Error finding container 6075f26020b094d40b469e531da4fd4e25098cb025ffd05cf55ee38840c971fc: Status 404 returned error can't find the container with id 6075f26020b094d40b469e531da4fd4e25098cb025ffd05cf55ee38840c971fc Mar 19 09:44:37.338169 master-0 kubenswrapper[27819]: I0319 09:44:37.338130 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-795f8f9784-4k4wd"] Mar 19 09:44:37.346342 master-0 kubenswrapper[27819]: I0319 09:44:37.346287 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-5frqz" podStartSLOduration=4.346265863 podStartE2EDuration="4.346265863s" podCreationTimestamp="2026-03-19 09:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:44:37.342026688 +0000 UTC m=+662.263604390" watchObservedRunningTime="2026-03-19 09:44:37.346265863 +0000 UTC m=+662.267843555" Mar 19 09:44:38.319242 master-0 kubenswrapper[27819]: I0319 09:44:38.319149 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795f8f9784-4k4wd" event={"ID":"e502fdfc-46f9-4129-afc5-df7e04efe586","Type":"ContainerStarted","Data":"f18e8e0ffe4bda8fb5fd4469350a569ad1d8f821675c5bcb67ffb8d16b12a4fd"} Mar 19 09:44:38.319242 master-0 kubenswrapper[27819]: I0319 09:44:38.319195 27819 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-console/console-795f8f9784-4k4wd" event={"ID":"e502fdfc-46f9-4129-afc5-df7e04efe586","Type":"ContainerStarted","Data":"6075f26020b094d40b469e531da4fd4e25098cb025ffd05cf55ee38840c971fc"} Mar 19 09:44:38.342989 master-0 kubenswrapper[27819]: I0319 09:44:38.342919 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-795f8f9784-4k4wd" podStartSLOduration=2.342896677 podStartE2EDuration="2.342896677s" podCreationTimestamp="2026-03-19 09:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:44:38.338231342 +0000 UTC m=+663.259809034" watchObservedRunningTime="2026-03-19 09:44:38.342896677 +0000 UTC m=+663.264474369" Mar 19 09:44:42.351996 master-0 kubenswrapper[27819]: I0319 09:44:42.351798 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms" event={"ID":"027697b5-ed73-483f-b112-4a3500b7d6e4","Type":"ContainerStarted","Data":"97e57c4c8f4130a625c8c5b51fca9128cca21cfc166158e9ed91fcb21b7b88fa"} Mar 19 09:44:42.351996 master-0 kubenswrapper[27819]: I0319 09:44:42.351889 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms" Mar 19 09:44:42.354614 master-0 kubenswrapper[27819]: I0319 09:44:42.354567 27819 generic.go:334] "Generic (PLEG): container finished" podID="8d960bf6-0166-4e00-8ba8-ccc61b696d46" containerID="40187e99c36940ca4dd6fadaf28f040803640d44539ce7dc25236189a50e6949" exitCode=0 Mar 19 09:44:42.354996 master-0 kubenswrapper[27819]: I0319 09:44:42.354783 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerDied","Data":"40187e99c36940ca4dd6fadaf28f040803640d44539ce7dc25236189a50e6949"} Mar 19 09:44:42.358247 master-0 kubenswrapper[27819]: I0319 
09:44:42.358027 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-46v4t" event={"ID":"d6f86547-cdd9-4d2b-b761-d1cb3bbb984f","Type":"ContainerStarted","Data":"3b31905e4cf882f159685c994ecfae69e12830943db682177cae69cba41895a7"} Mar 19 09:44:42.358247 master-0 kubenswrapper[27819]: I0319 09:44:42.358166 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-46v4t" Mar 19 09:44:42.361302 master-0 kubenswrapper[27819]: I0319 09:44:42.359742 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9" event={"ID":"11c4fb88-0ede-4120-af8b-c2f2422bd326","Type":"ContainerStarted","Data":"61ff8f9574acd27b2df75b8c00da30bb1142ff13063cb46516ea1d6757a972f4"} Mar 19 09:44:42.364468 master-0 kubenswrapper[27819]: I0319 09:44:42.363941 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8" event={"ID":"e2a31567-4e56-48ea-97e9-16abfd79c17c","Type":"ContainerStarted","Data":"cf9b7dc2430647e2ad2a64a54dd46b4d669270525e874f7a3105aa5469abf5be"} Mar 19 09:44:42.364468 master-0 kubenswrapper[27819]: I0319 09:44:42.364001 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8" event={"ID":"e2a31567-4e56-48ea-97e9-16abfd79c17c","Type":"ContainerStarted","Data":"ab55ac77726ff862e52f8707fc82fd6eae69f5612d302813be3322f69d5f4638"} Mar 19 09:44:42.365383 master-0 kubenswrapper[27819]: I0319 09:44:42.365357 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" event={"ID":"300b82db-40bd-4206-bb4e-11db3ac40342","Type":"ContainerStarted","Data":"4ea2eb471f1bfa8cdf40c18ca5bf5c40d3e8d14d8151be949e4046ecceb87aa6"} Mar 19 09:44:42.365594 master-0 kubenswrapper[27819]: I0319 09:44:42.365532 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" Mar 19 09:44:42.384346 master-0 kubenswrapper[27819]: I0319 09:44:42.384104 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms" podStartSLOduration=2.8201642749999998 podStartE2EDuration="7.384082572s" podCreationTimestamp="2026-03-19 09:44:35 +0000 UTC" firstStartedPulling="2026-03-19 09:44:37.197186515 +0000 UTC m=+662.118764207" lastFinishedPulling="2026-03-19 09:44:41.761104812 +0000 UTC m=+666.682682504" observedRunningTime="2026-03-19 09:44:42.380010511 +0000 UTC m=+667.301588223" watchObservedRunningTime="2026-03-19 09:44:42.384082572 +0000 UTC m=+667.305660264" Mar 19 09:44:42.433776 master-0 kubenswrapper[27819]: I0319 09:44:42.433699 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-cttv9" podStartSLOduration=1.940747643 podStartE2EDuration="6.433675322s" podCreationTimestamp="2026-03-19 09:44:36 +0000 UTC" firstStartedPulling="2026-03-19 09:44:37.269815227 +0000 UTC m=+662.191392919" lastFinishedPulling="2026-03-19 09:44:41.762742906 +0000 UTC m=+666.684320598" observedRunningTime="2026-03-19 09:44:42.431846492 +0000 UTC m=+667.353424184" watchObservedRunningTime="2026-03-19 09:44:42.433675322 +0000 UTC m=+667.355253014" Mar 19 09:44:42.498604 master-0 kubenswrapper[27819]: I0319 09:44:42.496621 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-46v4t" podStartSLOduration=1.396169421 podStartE2EDuration="6.496604982s" podCreationTimestamp="2026-03-19 09:44:36 +0000 UTC" firstStartedPulling="2026-03-19 09:44:36.660718663 +0000 UTC m=+661.582296355" lastFinishedPulling="2026-03-19 09:44:41.761154224 +0000 UTC m=+666.682731916" observedRunningTime="2026-03-19 09:44:42.473361584 +0000 UTC m=+667.394939276" watchObservedRunningTime="2026-03-19 09:44:42.496604982 +0000 UTC 
m=+667.418182674" Mar 19 09:44:42.516128 master-0 kubenswrapper[27819]: I0319 09:44:42.515742 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" podStartSLOduration=2.323352224 podStartE2EDuration="9.515722018s" podCreationTimestamp="2026-03-19 09:44:33 +0000 UTC" firstStartedPulling="2026-03-19 09:44:34.570230108 +0000 UTC m=+659.491807800" lastFinishedPulling="2026-03-19 09:44:41.762599902 +0000 UTC m=+666.684177594" observedRunningTime="2026-03-19 09:44:42.514390072 +0000 UTC m=+667.435967764" watchObservedRunningTime="2026-03-19 09:44:42.515722018 +0000 UTC m=+667.437299720" Mar 19 09:44:42.534877 master-0 kubenswrapper[27819]: I0319 09:44:42.534758 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-7lpb8" podStartSLOduration=2.949073417 podStartE2EDuration="7.534743082s" podCreationTimestamp="2026-03-19 09:44:35 +0000 UTC" firstStartedPulling="2026-03-19 09:44:37.174728818 +0000 UTC m=+662.096306510" lastFinishedPulling="2026-03-19 09:44:41.760398483 +0000 UTC m=+666.681976175" observedRunningTime="2026-03-19 09:44:42.533592991 +0000 UTC m=+667.455170683" watchObservedRunningTime="2026-03-19 09:44:42.534743082 +0000 UTC m=+667.456320774" Mar 19 09:44:43.382672 master-0 kubenswrapper[27819]: I0319 09:44:43.382618 27819 generic.go:334] "Generic (PLEG): container finished" podID="8d960bf6-0166-4e00-8ba8-ccc61b696d46" containerID="071792ceae73e0a054bd9090d3ca4a3beb2e3f07ddcb9c50debba4f991b1fe54" exitCode=0 Mar 19 09:44:43.383357 master-0 kubenswrapper[27819]: I0319 09:44:43.382721 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerDied","Data":"071792ceae73e0a054bd9090d3ca4a3beb2e3f07ddcb9c50debba4f991b1fe54"} Mar 19 09:44:44.238458 master-0 kubenswrapper[27819]: I0319 09:44:44.238415 27819 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-mpwcm" Mar 19 09:44:44.397566 master-0 kubenswrapper[27819]: I0319 09:44:44.397466 27819 generic.go:334] "Generic (PLEG): container finished" podID="8d960bf6-0166-4e00-8ba8-ccc61b696d46" containerID="6a9f489ca0b4d9d63b621bdf5974187d52b04b986436cc6b4c06ec4223816667" exitCode=0 Mar 19 09:44:44.398195 master-0 kubenswrapper[27819]: I0319 09:44:44.397577 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerDied","Data":"6a9f489ca0b4d9d63b621bdf5974187d52b04b986436cc6b4c06ec4223816667"} Mar 19 09:44:45.409624 master-0 kubenswrapper[27819]: I0319 09:44:45.409570 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerStarted","Data":"baa24723eda0956e2083fc04a32b3703f299a9654a8be2ea34062a176c1c478f"} Mar 19 09:44:45.409624 master-0 kubenswrapper[27819]: I0319 09:44:45.409628 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerStarted","Data":"5267db1556368f6904d71979bfe6f9ceff3c6ee043bd123f4de962fac8ddc819"} Mar 19 09:44:45.410139 master-0 kubenswrapper[27819]: I0319 09:44:45.409640 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerStarted","Data":"2e114e451b3de417284b3f4163df5fe221272845bbc241b3146f4c5b42bf2914"} Mar 19 09:44:45.410139 master-0 kubenswrapper[27819]: I0319 09:44:45.409649 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerStarted","Data":"70cc6446597d59217cd7b28f4fa6bd63f1bdc1781012ab34657445546a41c14d"} Mar 19 
09:44:45.410139 master-0 kubenswrapper[27819]: I0319 09:44:45.409659 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerStarted","Data":"f3a96667fb51d49c262bac0b10bffef11195ef432ee18a090cb6aee7ae2fb156"} Mar 19 09:44:45.410139 master-0 kubenswrapper[27819]: I0319 09:44:45.409668 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j4z46" event={"ID":"8d960bf6-0166-4e00-8ba8-ccc61b696d46","Type":"ContainerStarted","Data":"2b5d47080201d501cb7bc6297e55629d98be6f92ed82563fe74e88590661e0ab"} Mar 19 09:44:45.410139 master-0 kubenswrapper[27819]: I0319 09:44:45.409926 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:45.436632 master-0 kubenswrapper[27819]: I0319 09:44:45.436505 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-j4z46" podStartSLOduration=4.94752348 podStartE2EDuration="12.436479437s" podCreationTimestamp="2026-03-19 09:44:33 +0000 UTC" firstStartedPulling="2026-03-19 09:44:34.270749687 +0000 UTC m=+659.192327369" lastFinishedPulling="2026-03-19 09:44:41.759705634 +0000 UTC m=+666.681283326" observedRunningTime="2026-03-19 09:44:45.430696577 +0000 UTC m=+670.352274279" watchObservedRunningTime="2026-03-19 09:44:45.436479437 +0000 UTC m=+670.358057129" Mar 19 09:44:46.829635 master-0 kubenswrapper[27819]: I0319 09:44:46.829518 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-795f8f9784-4k4wd" Mar 19 09:44:46.830170 master-0 kubenswrapper[27819]: I0319 09:44:46.829648 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-795f8f9784-4k4wd" Mar 19 09:44:46.836334 master-0 kubenswrapper[27819]: I0319 09:44:46.836283 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-795f8f9784-4k4wd" Mar 19 09:44:47.437059 master-0 kubenswrapper[27819]: I0319 09:44:47.436998 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-795f8f9784-4k4wd" Mar 19 09:44:47.541088 master-0 kubenswrapper[27819]: I0319 09:44:47.541025 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bcfd95575-qmxs7"] Mar 19 09:44:49.146603 master-0 kubenswrapper[27819]: I0319 09:44:49.146482 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:49.198705 master-0 kubenswrapper[27819]: I0319 09:44:49.198625 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:51.477403 master-0 kubenswrapper[27819]: I0319 09:44:51.477352 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-46v4t" Mar 19 09:44:54.136037 master-0 kubenswrapper[27819]: I0319 09:44:54.135952 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-977gq" Mar 19 09:44:54.150042 master-0 kubenswrapper[27819]: I0319 09:44:54.149980 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-j4z46" Mar 19 09:44:55.719818 master-0 kubenswrapper[27819]: I0319 09:44:55.719727 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-5frqz" Mar 19 09:44:56.449122 master-0 kubenswrapper[27819]: I0319 09:44:56.449023 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jf6ms" Mar 19 09:45:01.480583 master-0 kubenswrapper[27819]: I0319 09:45:01.480515 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-f2t8g"] Mar 19 09:45:01.481841 master-0 
kubenswrapper[27819]: I0319 09:45:01.481811 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.487807 master-0 kubenswrapper[27819]: I0319 09:45:01.487749 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 19 09:45:01.499725 master-0 kubenswrapper[27819]: I0319 09:45:01.498241 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-f2t8g"] Mar 19 09:45:01.533610 master-0 kubenswrapper[27819]: I0319 09:45:01.533078 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-registration-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.533610 master-0 kubenswrapper[27819]: I0319 09:45:01.533295 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-file-lock-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.533610 master-0 kubenswrapper[27819]: I0319 09:45:01.533357 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-sys\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.533610 master-0 kubenswrapper[27819]: I0319 09:45:01.533385 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xkn\" (UniqueName: 
\"kubernetes.io/projected/410e5023-25e1-4a5b-98e8-02a2999b84ce-kube-api-access-68xkn\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.533610 master-0 kubenswrapper[27819]: I0319 09:45:01.533460 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-run-udev\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.533610 master-0 kubenswrapper[27819]: I0319 09:45:01.533506 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-device-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.534043 master-0 kubenswrapper[27819]: I0319 09:45:01.533732 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-csi-plugin-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.534043 master-0 kubenswrapper[27819]: I0319 09:45:01.533767 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-lvmd-config\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.534043 master-0 kubenswrapper[27819]: I0319 09:45:01.533810 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" 
(UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-node-plugin-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.534043 master-0 kubenswrapper[27819]: I0319 09:45:01.533905 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/410e5023-25e1-4a5b-98e8-02a2999b84ce-metrics-cert\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.534043 master-0 kubenswrapper[27819]: I0319 09:45:01.533962 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-pod-volumes-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.635766 master-0 kubenswrapper[27819]: I0319 09:45:01.635634 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-device-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.635766 master-0 kubenswrapper[27819]: I0319 09:45:01.635722 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-csi-plugin-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.635766 master-0 kubenswrapper[27819]: I0319 09:45:01.635742 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: 
\"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-lvmd-config\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.635766 master-0 kubenswrapper[27819]: I0319 09:45:01.635771 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-node-plugin-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.635796 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/410e5023-25e1-4a5b-98e8-02a2999b84ce-metrics-cert\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.635820 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-pod-volumes-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.635838 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-registration-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.635870 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: 
\"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-file-lock-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.635891 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-sys\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.635906 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68xkn\" (UniqueName: \"kubernetes.io/projected/410e5023-25e1-4a5b-98e8-02a2999b84ce-kube-api-access-68xkn\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.635934 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-run-udev\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.636005 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-run-udev\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.636061 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-device-dir\") pod \"vg-manager-f2t8g\" (UID: 
\"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.636285 master-0 kubenswrapper[27819]: I0319 09:45:01.636247 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-csi-plugin-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.637055 master-0 kubenswrapper[27819]: I0319 09:45:01.636357 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-lvmd-config\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.637055 master-0 kubenswrapper[27819]: I0319 09:45:01.636484 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-node-plugin-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.639113 master-0 kubenswrapper[27819]: I0319 09:45:01.639063 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-file-lock-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.639234 master-0 kubenswrapper[27819]: I0319 09:45:01.639130 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-pod-volumes-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.639234 
master-0 kubenswrapper[27819]: I0319 09:45:01.639166 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-registration-dir\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.639234 master-0 kubenswrapper[27819]: I0319 09:45:01.639190 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/410e5023-25e1-4a5b-98e8-02a2999b84ce-sys\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.639974 master-0 kubenswrapper[27819]: I0319 09:45:01.639931 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/410e5023-25e1-4a5b-98e8-02a2999b84ce-metrics-cert\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.655943 master-0 kubenswrapper[27819]: I0319 09:45:01.655884 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68xkn\" (UniqueName: \"kubernetes.io/projected/410e5023-25e1-4a5b-98e8-02a2999b84ce-kube-api-access-68xkn\") pod \"vg-manager-f2t8g\" (UID: \"410e5023-25e1-4a5b-98e8-02a2999b84ce\") " pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:01.829137 master-0 kubenswrapper[27819]: I0319 09:45:01.828968 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:02.294960 master-0 kubenswrapper[27819]: I0319 09:45:02.294895 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-f2t8g"] Mar 19 09:45:02.305341 master-0 kubenswrapper[27819]: W0319 09:45:02.305279 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod410e5023_25e1_4a5b_98e8_02a2999b84ce.slice/crio-94d93d303d2e8415ca4123627b3342d6a010a5a01f340786f829820d45b50147 WatchSource:0}: Error finding container 94d93d303d2e8415ca4123627b3342d6a010a5a01f340786f829820d45b50147: Status 404 returned error can't find the container with id 94d93d303d2e8415ca4123627b3342d6a010a5a01f340786f829820d45b50147 Mar 19 09:45:02.581152 master-0 kubenswrapper[27819]: I0319 09:45:02.580999 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-f2t8g" event={"ID":"410e5023-25e1-4a5b-98e8-02a2999b84ce","Type":"ContainerStarted","Data":"6c1543dbd61626cc214024c509146a841b493ca725d333c6d4c99bc77bd66781"} Mar 19 09:45:02.581152 master-0 kubenswrapper[27819]: I0319 09:45:02.581061 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-f2t8g" event={"ID":"410e5023-25e1-4a5b-98e8-02a2999b84ce","Type":"ContainerStarted","Data":"94d93d303d2e8415ca4123627b3342d6a010a5a01f340786f829820d45b50147"} Mar 19 09:45:02.603565 master-0 kubenswrapper[27819]: I0319 09:45:02.603478 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-f2t8g" podStartSLOduration=1.6034549999999999 podStartE2EDuration="1.603455s" podCreationTimestamp="2026-03-19 09:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:45:02.601819923 +0000 UTC m=+687.523397625" watchObservedRunningTime="2026-03-19 09:45:02.603455 +0000 
UTC m=+687.525032712" Mar 19 09:45:04.620973 master-0 kubenswrapper[27819]: I0319 09:45:04.620901 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-f2t8g_410e5023-25e1-4a5b-98e8-02a2999b84ce/vg-manager/0.log" Mar 19 09:45:04.621597 master-0 kubenswrapper[27819]: I0319 09:45:04.620979 27819 generic.go:334] "Generic (PLEG): container finished" podID="410e5023-25e1-4a5b-98e8-02a2999b84ce" containerID="6c1543dbd61626cc214024c509146a841b493ca725d333c6d4c99bc77bd66781" exitCode=1 Mar 19 09:45:04.621597 master-0 kubenswrapper[27819]: I0319 09:45:04.621130 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-f2t8g" event={"ID":"410e5023-25e1-4a5b-98e8-02a2999b84ce","Type":"ContainerDied","Data":"6c1543dbd61626cc214024c509146a841b493ca725d333c6d4c99bc77bd66781"} Mar 19 09:45:04.625280 master-0 kubenswrapper[27819]: I0319 09:45:04.624765 27819 scope.go:117] "RemoveContainer" containerID="6c1543dbd61626cc214024c509146a841b493ca725d333c6d4c99bc77bd66781" Mar 19 09:45:04.963217 master-0 kubenswrapper[27819]: I0319 09:45:04.962837 27819 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 19 09:45:05.515587 master-0 kubenswrapper[27819]: I0319 09:45:05.513268 27819 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-19T09:45:04.962867173Z","Handler":null,"Name":""} Mar 19 09:45:05.522626 master-0 kubenswrapper[27819]: I0319 09:45:05.516995 27819 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 19 09:45:05.522626 master-0 kubenswrapper[27819]: I0319 09:45:05.517065 27819 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: 
/var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 19 09:45:05.634254 master-0 kubenswrapper[27819]: I0319 09:45:05.634194 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-f2t8g_410e5023-25e1-4a5b-98e8-02a2999b84ce/vg-manager/0.log" Mar 19 09:45:05.634841 master-0 kubenswrapper[27819]: I0319 09:45:05.634282 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-f2t8g" event={"ID":"410e5023-25e1-4a5b-98e8-02a2999b84ce","Type":"ContainerStarted","Data":"a5d2fdbde7c0bb24b485f61ad2b62972a3101c3c43605365c9b4aae5a3777091"} Mar 19 09:45:08.305727 master-0 kubenswrapper[27819]: I0319 09:45:08.305649 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dnbrz"] Mar 19 09:45:08.307401 master-0 kubenswrapper[27819]: I0319 09:45:08.307367 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:08.314005 master-0 kubenswrapper[27819]: I0319 09:45:08.313951 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 19 09:45:08.314572 master-0 kubenswrapper[27819]: I0319 09:45:08.314521 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 19 09:45:08.334569 master-0 kubenswrapper[27819]: I0319 09:45:08.334074 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dnbrz"] Mar 19 09:45:08.373570 master-0 kubenswrapper[27819]: I0319 09:45:08.370959 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7kc\" (UniqueName: \"kubernetes.io/projected/328b6da5-cda9-44fc-9c67-577e290d389a-kube-api-access-gx7kc\") pod \"openstack-operator-index-dnbrz\" (UID: \"328b6da5-cda9-44fc-9c67-577e290d389a\") " 
pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:08.488483 master-0 kubenswrapper[27819]: I0319 09:45:08.488428 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7kc\" (UniqueName: \"kubernetes.io/projected/328b6da5-cda9-44fc-9c67-577e290d389a-kube-api-access-gx7kc\") pod \"openstack-operator-index-dnbrz\" (UID: \"328b6da5-cda9-44fc-9c67-577e290d389a\") " pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:08.553505 master-0 kubenswrapper[27819]: I0319 09:45:08.553126 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7kc\" (UniqueName: \"kubernetes.io/projected/328b6da5-cda9-44fc-9c67-577e290d389a-kube-api-access-gx7kc\") pod \"openstack-operator-index-dnbrz\" (UID: \"328b6da5-cda9-44fc-9c67-577e290d389a\") " pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:08.627705 master-0 kubenswrapper[27819]: I0319 09:45:08.627624 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:09.047068 master-0 kubenswrapper[27819]: I0319 09:45:09.045522 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dnbrz"] Mar 19 09:45:09.053729 master-0 kubenswrapper[27819]: W0319 09:45:09.053696 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod328b6da5_cda9_44fc_9c67_577e290d389a.slice/crio-343b835c0996475e1eaca3a2672e42c1cdb34be0fdd79eea4a097849b4596fff WatchSource:0}: Error finding container 343b835c0996475e1eaca3a2672e42c1cdb34be0fdd79eea4a097849b4596fff: Status 404 returned error can't find the container with id 343b835c0996475e1eaca3a2672e42c1cdb34be0fdd79eea4a097849b4596fff Mar 19 09:45:09.671214 master-0 kubenswrapper[27819]: I0319 09:45:09.671156 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dnbrz" event={"ID":"328b6da5-cda9-44fc-9c67-577e290d389a","Type":"ContainerStarted","Data":"343b835c0996475e1eaca3a2672e42c1cdb34be0fdd79eea4a097849b4596fff"} Mar 19 09:45:10.680500 master-0 kubenswrapper[27819]: I0319 09:45:10.680380 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dnbrz" event={"ID":"328b6da5-cda9-44fc-9c67-577e290d389a","Type":"ContainerStarted","Data":"f78c80fcbd280803cc7a792ee5f590d4e9ceebda4328a2f4813b8f72fdab89ef"} Mar 19 09:45:10.708733 master-0 kubenswrapper[27819]: I0319 09:45:10.708589 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dnbrz" podStartSLOduration=1.861736249 podStartE2EDuration="2.708568447s" podCreationTimestamp="2026-03-19 09:45:08 +0000 UTC" firstStartedPulling="2026-03-19 09:45:09.055296428 +0000 UTC m=+693.976874120" lastFinishedPulling="2026-03-19 09:45:09.902128626 +0000 UTC m=+694.823706318" 
observedRunningTime="2026-03-19 09:45:10.696186138 +0000 UTC m=+695.617763830" watchObservedRunningTime="2026-03-19 09:45:10.708568447 +0000 UTC m=+695.630146139" Mar 19 09:45:11.829680 master-0 kubenswrapper[27819]: I0319 09:45:11.829607 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:11.832191 master-0 kubenswrapper[27819]: I0319 09:45:11.832131 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:12.595896 master-0 kubenswrapper[27819]: I0319 09:45:12.595775 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6bcfd95575-qmxs7" podUID="306638cd-9ae4-48a3-a7d7-7e3b935df93f" containerName="console" containerID="cri-o://e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354" gracePeriod=15 Mar 19 09:45:12.697672 master-0 kubenswrapper[27819]: I0319 09:45:12.697601 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:12.698842 master-0 kubenswrapper[27819]: I0319 09:45:12.698783 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-f2t8g" Mar 19 09:45:13.143128 master-0 kubenswrapper[27819]: I0319 09:45:13.143081 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bcfd95575-qmxs7_306638cd-9ae4-48a3-a7d7-7e3b935df93f/console/0.log" Mar 19 09:45:13.143619 master-0 kubenswrapper[27819]: I0319 09:45:13.143155 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:45:13.270286 master-0 kubenswrapper[27819]: I0319 09:45:13.270206 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-config\") pod \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " Mar 19 09:45:13.270522 master-0 kubenswrapper[27819]: I0319 09:45:13.270320 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-trusted-ca-bundle\") pod \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " Mar 19 09:45:13.270522 master-0 kubenswrapper[27819]: I0319 09:45:13.270376 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-service-ca\") pod \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " Mar 19 09:45:13.270522 master-0 kubenswrapper[27819]: I0319 09:45:13.270426 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwz6q\" (UniqueName: \"kubernetes.io/projected/306638cd-9ae4-48a3-a7d7-7e3b935df93f-kube-api-access-nwz6q\") pod \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " Mar 19 09:45:13.270522 master-0 kubenswrapper[27819]: I0319 09:45:13.270481 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-serving-cert\") pod \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " Mar 19 09:45:13.270522 master-0 kubenswrapper[27819]: I0319 
09:45:13.270502 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-oauth-config\") pod \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " Mar 19 09:45:13.270789 master-0 kubenswrapper[27819]: I0319 09:45:13.270585 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-oauth-serving-cert\") pod \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\" (UID: \"306638cd-9ae4-48a3-a7d7-7e3b935df93f\") " Mar 19 09:45:13.271232 master-0 kubenswrapper[27819]: I0319 09:45:13.271048 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-service-ca" (OuterVolumeSpecName: "service-ca") pod "306638cd-9ae4-48a3-a7d7-7e3b935df93f" (UID: "306638cd-9ae4-48a3-a7d7-7e3b935df93f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:45:13.271498 master-0 kubenswrapper[27819]: I0319 09:45:13.271444 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "306638cd-9ae4-48a3-a7d7-7e3b935df93f" (UID: "306638cd-9ae4-48a3-a7d7-7e3b935df93f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:45:13.271568 master-0 kubenswrapper[27819]: I0319 09:45:13.271466 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-config" (OuterVolumeSpecName: "console-config") pod "306638cd-9ae4-48a3-a7d7-7e3b935df93f" (UID: "306638cd-9ae4-48a3-a7d7-7e3b935df93f"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:45:13.271896 master-0 kubenswrapper[27819]: I0319 09:45:13.271799 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "306638cd-9ae4-48a3-a7d7-7e3b935df93f" (UID: "306638cd-9ae4-48a3-a7d7-7e3b935df93f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:45:13.273500 master-0 kubenswrapper[27819]: I0319 09:45:13.273438 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "306638cd-9ae4-48a3-a7d7-7e3b935df93f" (UID: "306638cd-9ae4-48a3-a7d7-7e3b935df93f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:45:13.273570 master-0 kubenswrapper[27819]: I0319 09:45:13.273459 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306638cd-9ae4-48a3-a7d7-7e3b935df93f-kube-api-access-nwz6q" (OuterVolumeSpecName: "kube-api-access-nwz6q") pod "306638cd-9ae4-48a3-a7d7-7e3b935df93f" (UID: "306638cd-9ae4-48a3-a7d7-7e3b935df93f"). InnerVolumeSpecName "kube-api-access-nwz6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:45:13.274054 master-0 kubenswrapper[27819]: I0319 09:45:13.273986 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "306638cd-9ae4-48a3-a7d7-7e3b935df93f" (UID: "306638cd-9ae4-48a3-a7d7-7e3b935df93f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:45:13.373856 master-0 kubenswrapper[27819]: I0319 09:45:13.373796 27819 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:13.373856 master-0 kubenswrapper[27819]: I0319 09:45:13.373831 27819 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:13.373856 master-0 kubenswrapper[27819]: I0319 09:45:13.373843 27819 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:13.373856 master-0 kubenswrapper[27819]: I0319 09:45:13.373853 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwz6q\" (UniqueName: \"kubernetes.io/projected/306638cd-9ae4-48a3-a7d7-7e3b935df93f-kube-api-access-nwz6q\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:13.373856 master-0 kubenswrapper[27819]: I0319 09:45:13.373862 27819 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:13.373856 master-0 kubenswrapper[27819]: I0319 09:45:13.373872 27819 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/306638cd-9ae4-48a3-a7d7-7e3b935df93f-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:13.374326 master-0 kubenswrapper[27819]: I0319 09:45:13.373882 27819 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/306638cd-9ae4-48a3-a7d7-7e3b935df93f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:13.711491 master-0 kubenswrapper[27819]: I0319 09:45:13.711351 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6bcfd95575-qmxs7_306638cd-9ae4-48a3-a7d7-7e3b935df93f/console/0.log" Mar 19 09:45:13.711992 master-0 kubenswrapper[27819]: I0319 09:45:13.711501 27819 generic.go:334] "Generic (PLEG): container finished" podID="306638cd-9ae4-48a3-a7d7-7e3b935df93f" containerID="e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354" exitCode=2 Mar 19 09:45:13.711992 master-0 kubenswrapper[27819]: I0319 09:45:13.711577 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcfd95575-qmxs7" event={"ID":"306638cd-9ae4-48a3-a7d7-7e3b935df93f","Type":"ContainerDied","Data":"e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354"} Mar 19 09:45:13.711992 master-0 kubenswrapper[27819]: I0319 09:45:13.711648 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcfd95575-qmxs7" event={"ID":"306638cd-9ae4-48a3-a7d7-7e3b935df93f","Type":"ContainerDied","Data":"47a0a22edb6a1a7570f9a898a3cefc844beff97d37fc8480116127fb9efa9655"} Mar 19 09:45:13.711992 master-0 kubenswrapper[27819]: I0319 09:45:13.711671 27819 scope.go:117] "RemoveContainer" containerID="e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354" Mar 19 09:45:13.711992 master-0 kubenswrapper[27819]: I0319 09:45:13.711596 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bcfd95575-qmxs7" Mar 19 09:45:13.740972 master-0 kubenswrapper[27819]: I0319 09:45:13.740892 27819 scope.go:117] "RemoveContainer" containerID="e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354" Mar 19 09:45:13.741519 master-0 kubenswrapper[27819]: E0319 09:45:13.741481 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354\": container with ID starting with e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354 not found: ID does not exist" containerID="e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354" Mar 19 09:45:13.741614 master-0 kubenswrapper[27819]: I0319 09:45:13.741526 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354"} err="failed to get container status \"e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354\": rpc error: code = NotFound desc = could not find container \"e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354\": container with ID starting with e0b5834cc5d363e1a43eb576acfdb78ddf63424d3a4d2105fafb33c6198c8354 not found: ID does not exist" Mar 19 09:45:13.742284 master-0 kubenswrapper[27819]: I0319 09:45:13.742244 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6bcfd95575-qmxs7"] Mar 19 09:45:13.750889 master-0 kubenswrapper[27819]: I0319 09:45:13.750856 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6bcfd95575-qmxs7"] Mar 19 09:45:15.295569 master-0 kubenswrapper[27819]: I0319 09:45:15.295501 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306638cd-9ae4-48a3-a7d7-7e3b935df93f" path="/var/lib/kubelet/pods/306638cd-9ae4-48a3-a7d7-7e3b935df93f/volumes" Mar 19 
09:45:18.628739 master-0 kubenswrapper[27819]: I0319 09:45:18.628655 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:18.628739 master-0 kubenswrapper[27819]: I0319 09:45:18.628708 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:18.655975 master-0 kubenswrapper[27819]: I0319 09:45:18.655918 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:18.800231 master-0 kubenswrapper[27819]: I0319 09:45:18.800119 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dnbrz" Mar 19 09:45:19.684874 master-0 kubenswrapper[27819]: I0319 09:45:19.684807 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6"] Mar 19 09:45:19.685600 master-0 kubenswrapper[27819]: E0319 09:45:19.685278 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306638cd-9ae4-48a3-a7d7-7e3b935df93f" containerName="console" Mar 19 09:45:19.685600 master-0 kubenswrapper[27819]: I0319 09:45:19.685295 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="306638cd-9ae4-48a3-a7d7-7e3b935df93f" containerName="console" Mar 19 09:45:19.685692 master-0 kubenswrapper[27819]: I0319 09:45:19.685604 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="306638cd-9ae4-48a3-a7d7-7e3b935df93f" containerName="console" Mar 19 09:45:19.687119 master-0 kubenswrapper[27819]: I0319 09:45:19.687088 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.717443 master-0 kubenswrapper[27819]: I0319 09:45:19.717385 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6"] Mar 19 09:45:19.788019 master-0 kubenswrapper[27819]: I0319 09:45:19.787948 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lrm7\" (UniqueName: \"kubernetes.io/projected/f998bb40-df3e-44e1-af52-d6089fa52f30-kube-api-access-8lrm7\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.788356 master-0 kubenswrapper[27819]: I0319 09:45:19.788337 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.788474 master-0 kubenswrapper[27819]: I0319 09:45:19.788459 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.890331 master-0 kubenswrapper[27819]: I0319 09:45:19.890254 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8lrm7\" (UniqueName: \"kubernetes.io/projected/f998bb40-df3e-44e1-af52-d6089fa52f30-kube-api-access-8lrm7\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.890753 master-0 kubenswrapper[27819]: I0319 09:45:19.890726 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.890994 master-0 kubenswrapper[27819]: I0319 09:45:19.890967 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.891259 master-0 kubenswrapper[27819]: I0319 09:45:19.891212 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.891664 master-0 kubenswrapper[27819]: I0319 09:45:19.891625 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-bundle\") pod 
\"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:19.907407 master-0 kubenswrapper[27819]: I0319 09:45:19.907347 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lrm7\" (UniqueName: \"kubernetes.io/projected/f998bb40-df3e-44e1-af52-d6089fa52f30-kube-api-access-8lrm7\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:20.007780 master-0 kubenswrapper[27819]: I0319 09:45:20.007514 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:20.453846 master-0 kubenswrapper[27819]: W0319 09:45:20.453781 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf998bb40_df3e_44e1_af52_d6089fa52f30.slice/crio-3abc54d477496558bed97cb1e45aedb883b1d683a910eb42dd18a58bf50166c1 WatchSource:0}: Error finding container 3abc54d477496558bed97cb1e45aedb883b1d683a910eb42dd18a58bf50166c1: Status 404 returned error can't find the container with id 3abc54d477496558bed97cb1e45aedb883b1d683a910eb42dd18a58bf50166c1 Mar 19 09:45:20.457932 master-0 kubenswrapper[27819]: I0319 09:45:20.457836 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6"] Mar 19 09:45:20.785775 master-0 kubenswrapper[27819]: I0319 09:45:20.785663 27819 generic.go:334] "Generic (PLEG): container finished" podID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerID="20e740414861108c47b7de07a6515cccbb6ab95cbb48da5d51a40b243ad4d1cc" exitCode=0 Mar 19 
09:45:20.786330 master-0 kubenswrapper[27819]: I0319 09:45:20.785732 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" event={"ID":"f998bb40-df3e-44e1-af52-d6089fa52f30","Type":"ContainerDied","Data":"20e740414861108c47b7de07a6515cccbb6ab95cbb48da5d51a40b243ad4d1cc"} Mar 19 09:45:20.786330 master-0 kubenswrapper[27819]: I0319 09:45:20.785823 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" event={"ID":"f998bb40-df3e-44e1-af52-d6089fa52f30","Type":"ContainerStarted","Data":"3abc54d477496558bed97cb1e45aedb883b1d683a910eb42dd18a58bf50166c1"} Mar 19 09:45:21.795878 master-0 kubenswrapper[27819]: I0319 09:45:21.795758 27819 generic.go:334] "Generic (PLEG): container finished" podID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerID="a780fb9088284a2a217e093556a23ae0388604971b3188cab9ff6e4342612c2b" exitCode=0 Mar 19 09:45:21.795878 master-0 kubenswrapper[27819]: I0319 09:45:21.795803 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" event={"ID":"f998bb40-df3e-44e1-af52-d6089fa52f30","Type":"ContainerDied","Data":"a780fb9088284a2a217e093556a23ae0388604971b3188cab9ff6e4342612c2b"} Mar 19 09:45:22.805042 master-0 kubenswrapper[27819]: I0319 09:45:22.804970 27819 generic.go:334] "Generic (PLEG): container finished" podID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerID="24e6ad001979737b2d4b06f324356f743a851381016e8ad48051304f2cf9798f" exitCode=0 Mar 19 09:45:22.805042 master-0 kubenswrapper[27819]: I0319 09:45:22.805019 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" 
event={"ID":"f998bb40-df3e-44e1-af52-d6089fa52f30","Type":"ContainerDied","Data":"24e6ad001979737b2d4b06f324356f743a851381016e8ad48051304f2cf9798f"} Mar 19 09:45:24.150888 master-0 kubenswrapper[27819]: I0319 09:45:24.150323 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:24.262653 master-0 kubenswrapper[27819]: I0319 09:45:24.262443 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-util\") pod \"f998bb40-df3e-44e1-af52-d6089fa52f30\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " Mar 19 09:45:24.262653 master-0 kubenswrapper[27819]: I0319 09:45:24.262551 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-bundle\") pod \"f998bb40-df3e-44e1-af52-d6089fa52f30\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " Mar 19 09:45:24.262653 master-0 kubenswrapper[27819]: I0319 09:45:24.262615 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lrm7\" (UniqueName: \"kubernetes.io/projected/f998bb40-df3e-44e1-af52-d6089fa52f30-kube-api-access-8lrm7\") pod \"f998bb40-df3e-44e1-af52-d6089fa52f30\" (UID: \"f998bb40-df3e-44e1-af52-d6089fa52f30\") " Mar 19 09:45:24.263457 master-0 kubenswrapper[27819]: I0319 09:45:24.263395 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-bundle" (OuterVolumeSpecName: "bundle") pod "f998bb40-df3e-44e1-af52-d6089fa52f30" (UID: "f998bb40-df3e-44e1-af52-d6089fa52f30"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:45:24.265672 master-0 kubenswrapper[27819]: I0319 09:45:24.265618 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f998bb40-df3e-44e1-af52-d6089fa52f30-kube-api-access-8lrm7" (OuterVolumeSpecName: "kube-api-access-8lrm7") pod "f998bb40-df3e-44e1-af52-d6089fa52f30" (UID: "f998bb40-df3e-44e1-af52-d6089fa52f30"). InnerVolumeSpecName "kube-api-access-8lrm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:45:24.284908 master-0 kubenswrapper[27819]: I0319 09:45:24.284782 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-util" (OuterVolumeSpecName: "util") pod "f998bb40-df3e-44e1-af52-d6089fa52f30" (UID: "f998bb40-df3e-44e1-af52-d6089fa52f30"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:45:24.366021 master-0 kubenswrapper[27819]: I0319 09:45:24.365948 27819 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:24.366021 master-0 kubenswrapper[27819]: I0319 09:45:24.365991 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lrm7\" (UniqueName: \"kubernetes.io/projected/f998bb40-df3e-44e1-af52-d6089fa52f30-kube-api-access-8lrm7\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:24.366021 master-0 kubenswrapper[27819]: I0319 09:45:24.366004 27819 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f998bb40-df3e-44e1-af52-d6089fa52f30-util\") on node \"master-0\" DevicePath \"\"" Mar 19 09:45:24.828073 master-0 kubenswrapper[27819]: I0319 09:45:24.827945 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" event={"ID":"f998bb40-df3e-44e1-af52-d6089fa52f30","Type":"ContainerDied","Data":"3abc54d477496558bed97cb1e45aedb883b1d683a910eb42dd18a58bf50166c1"} Mar 19 09:45:24.828073 master-0 kubenswrapper[27819]: I0319 09:45:24.828033 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3abc54d477496558bed97cb1e45aedb883b1d683a910eb42dd18a58bf50166c1" Mar 19 09:45:24.828654 master-0 kubenswrapper[27819]: I0319 09:45:24.828071 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514c9w9b6" Mar 19 09:45:27.040035 master-0 kubenswrapper[27819]: I0319 09:45:27.039992 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7"] Mar 19 09:45:27.040811 master-0 kubenswrapper[27819]: E0319 09:45:27.040317 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerName="pull" Mar 19 09:45:27.040811 master-0 kubenswrapper[27819]: I0319 09:45:27.040331 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerName="pull" Mar 19 09:45:27.040811 master-0 kubenswrapper[27819]: E0319 09:45:27.040360 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerName="extract" Mar 19 09:45:27.040811 master-0 kubenswrapper[27819]: I0319 09:45:27.040367 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerName="extract" Mar 19 09:45:27.040811 master-0 kubenswrapper[27819]: E0319 09:45:27.040392 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerName="util" Mar 19 09:45:27.040811 master-0 
kubenswrapper[27819]: I0319 09:45:27.040399 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerName="util" Mar 19 09:45:27.040811 master-0 kubenswrapper[27819]: I0319 09:45:27.040574 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="f998bb40-df3e-44e1-af52-d6089fa52f30" containerName="extract" Mar 19 09:45:27.041136 master-0 kubenswrapper[27819]: I0319 09:45:27.041048 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" Mar 19 09:45:27.098505 master-0 kubenswrapper[27819]: I0319 09:45:27.098447 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7"] Mar 19 09:45:27.123023 master-0 kubenswrapper[27819]: I0319 09:45:27.122949 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndppj\" (UniqueName: \"kubernetes.io/projected/52518669-1e0e-458f-a050-f04a0d3aaa30-kube-api-access-ndppj\") pod \"openstack-operator-controller-init-b85c4d696-z2tg7\" (UID: \"52518669-1e0e-458f-a050-f04a0d3aaa30\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" Mar 19 09:45:27.225040 master-0 kubenswrapper[27819]: I0319 09:45:27.224961 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndppj\" (UniqueName: \"kubernetes.io/projected/52518669-1e0e-458f-a050-f04a0d3aaa30-kube-api-access-ndppj\") pod \"openstack-operator-controller-init-b85c4d696-z2tg7\" (UID: \"52518669-1e0e-458f-a050-f04a0d3aaa30\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" Mar 19 09:45:27.242633 master-0 kubenswrapper[27819]: I0319 09:45:27.240771 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndppj\" (UniqueName: 
\"kubernetes.io/projected/52518669-1e0e-458f-a050-f04a0d3aaa30-kube-api-access-ndppj\") pod \"openstack-operator-controller-init-b85c4d696-z2tg7\" (UID: \"52518669-1e0e-458f-a050-f04a0d3aaa30\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" Mar 19 09:45:27.356601 master-0 kubenswrapper[27819]: I0319 09:45:27.356526 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" Mar 19 09:45:27.811918 master-0 kubenswrapper[27819]: I0319 09:45:27.811854 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7"] Mar 19 09:45:27.812177 master-0 kubenswrapper[27819]: W0319 09:45:27.811953 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52518669_1e0e_458f_a050_f04a0d3aaa30.slice/crio-04d070fb003f1cd72a22930a38d89ebc453910e1dc272621a93ed2f0f3ba9343 WatchSource:0}: Error finding container 04d070fb003f1cd72a22930a38d89ebc453910e1dc272621a93ed2f0f3ba9343: Status 404 returned error can't find the container with id 04d070fb003f1cd72a22930a38d89ebc453910e1dc272621a93ed2f0f3ba9343 Mar 19 09:45:27.866368 master-0 kubenswrapper[27819]: I0319 09:45:27.866267 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" event={"ID":"52518669-1e0e-458f-a050-f04a0d3aaa30","Type":"ContainerStarted","Data":"04d070fb003f1cd72a22930a38d89ebc453910e1dc272621a93ed2f0f3ba9343"} Mar 19 09:45:33.936299 master-0 kubenswrapper[27819]: I0319 09:45:33.936246 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" event={"ID":"52518669-1e0e-458f-a050-f04a0d3aaa30","Type":"ContainerStarted","Data":"9808ff07ac06ebefe947e15f75babe048428f7cbf305899cb008bc5485b86c00"} Mar 19 
09:45:33.972996 master-0 kubenswrapper[27819]: I0319 09:45:33.972924 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" podStartSLOduration=2.750740625 podStartE2EDuration="7.972903619s" podCreationTimestamp="2026-03-19 09:45:26 +0000 UTC" firstStartedPulling="2026-03-19 09:45:27.815693845 +0000 UTC m=+712.737271537" lastFinishedPulling="2026-03-19 09:45:33.037856839 +0000 UTC m=+717.959434531" observedRunningTime="2026-03-19 09:45:33.967835806 +0000 UTC m=+718.889413518" watchObservedRunningTime="2026-03-19 09:45:33.972903619 +0000 UTC m=+718.894481311" Mar 19 09:45:34.947880 master-0 kubenswrapper[27819]: I0319 09:45:34.947819 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" Mar 19 09:45:47.361062 master-0 kubenswrapper[27819]: I0319 09:45:47.360978 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-z2tg7" Mar 19 09:46:07.839447 master-0 kubenswrapper[27819]: I0319 09:46:07.839277 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb"] Mar 19 09:46:07.840852 master-0 kubenswrapper[27819]: I0319 09:46:07.840825 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" Mar 19 09:46:07.913440 master-0 kubenswrapper[27819]: I0319 09:46:07.912615 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8"] Mar 19 09:46:07.914886 master-0 kubenswrapper[27819]: I0319 09:46:07.914848 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" Mar 19 09:46:07.941850 master-0 kubenswrapper[27819]: I0319 09:46:07.941764 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb"] Mar 19 09:46:07.958502 master-0 kubenswrapper[27819]: I0319 09:46:07.958454 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg"] Mar 19 09:46:07.961707 master-0 kubenswrapper[27819]: I0319 09:46:07.961670 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" Mar 19 09:46:07.970314 master-0 kubenswrapper[27819]: I0319 09:46:07.970227 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8"] Mar 19 09:46:07.991377 master-0 kubenswrapper[27819]: I0319 09:46:07.991305 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg"] Mar 19 09:46:08.016574 master-0 kubenswrapper[27819]: I0319 09:46:08.006440 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb"] Mar 19 09:46:08.016574 master-0 kubenswrapper[27819]: I0319 09:46:08.007471 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" Mar 19 09:46:08.016574 master-0 kubenswrapper[27819]: I0319 09:46:08.008375 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xfc5\" (UniqueName: \"kubernetes.io/projected/7aaa7fb9-ba38-49ae-8cca-6b699862813d-kube-api-access-4xfc5\") pod \"barbican-operator-controller-manager-59bc569d95-4pvtb\" (UID: \"7aaa7fb9-ba38-49ae-8cca-6b699862813d\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" Mar 19 09:46:08.016574 master-0 kubenswrapper[27819]: I0319 09:46:08.008497 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfjrf\" (UniqueName: \"kubernetes.io/projected/c066a62a-37a8-453c-9267-fe8a8b744076-kube-api-access-bfjrf\") pod \"cinder-operator-controller-manager-8d58dc466-j9jj8\" (UID: \"c066a62a-37a8-453c-9267-fe8a8b744076\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" Mar 19 09:46:08.077732 master-0 kubenswrapper[27819]: I0319 09:46:08.077010 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj"] Mar 19 09:46:08.088584 master-0 kubenswrapper[27819]: I0319 09:46:08.078834 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" Mar 19 09:46:08.100376 master-0 kubenswrapper[27819]: I0319 09:46:08.100332 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb"] Mar 19 09:46:08.111166 master-0 kubenswrapper[27819]: I0319 09:46:08.111116 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws9dt\" (UniqueName: \"kubernetes.io/projected/1b2e9b70-1291-457f-a686-22ac2ee57053-kube-api-access-ws9dt\") pod \"glance-operator-controller-manager-79df6bcc97-w2bpb\" (UID: \"1b2e9b70-1291-457f-a686-22ac2ee57053\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" Mar 19 09:46:08.111985 master-0 kubenswrapper[27819]: I0319 09:46:08.111956 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xfc5\" (UniqueName: \"kubernetes.io/projected/7aaa7fb9-ba38-49ae-8cca-6b699862813d-kube-api-access-4xfc5\") pod \"barbican-operator-controller-manager-59bc569d95-4pvtb\" (UID: \"7aaa7fb9-ba38-49ae-8cca-6b699862813d\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" Mar 19 09:46:08.112596 master-0 kubenswrapper[27819]: I0319 09:46:08.112572 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkzl\" (UniqueName: \"kubernetes.io/projected/7d2df03a-c295-40bc-a256-936fc90ffe7f-kube-api-access-9bkzl\") pod \"designate-operator-controller-manager-588d4d986b-h4zlg\" (UID: \"7d2df03a-c295-40bc-a256-936fc90ffe7f\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" Mar 19 09:46:08.113189 master-0 kubenswrapper[27819]: I0319 09:46:08.113162 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfjrf\" (UniqueName: 
\"kubernetes.io/projected/c066a62a-37a8-453c-9267-fe8a8b744076-kube-api-access-bfjrf\") pod \"cinder-operator-controller-manager-8d58dc466-j9jj8\" (UID: \"c066a62a-37a8-453c-9267-fe8a8b744076\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" Mar 19 09:46:08.136208 master-0 kubenswrapper[27819]: I0319 09:46:08.132645 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj"] Mar 19 09:46:08.180038 master-0 kubenswrapper[27819]: I0319 09:46:08.179988 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfjrf\" (UniqueName: \"kubernetes.io/projected/c066a62a-37a8-453c-9267-fe8a8b744076-kube-api-access-bfjrf\") pod \"cinder-operator-controller-manager-8d58dc466-j9jj8\" (UID: \"c066a62a-37a8-453c-9267-fe8a8b744076\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" Mar 19 09:46:08.182459 master-0 kubenswrapper[27819]: I0319 09:46:08.181889 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xfc5\" (UniqueName: \"kubernetes.io/projected/7aaa7fb9-ba38-49ae-8cca-6b699862813d-kube-api-access-4xfc5\") pod \"barbican-operator-controller-manager-59bc569d95-4pvtb\" (UID: \"7aaa7fb9-ba38-49ae-8cca-6b699862813d\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" Mar 19 09:46:08.203499 master-0 kubenswrapper[27819]: I0319 09:46:08.203449 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2"] Mar 19 09:46:08.207790 master-0 kubenswrapper[27819]: I0319 09:46:08.207744 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2" Mar 19 09:46:08.223651 master-0 kubenswrapper[27819]: I0319 09:46:08.220793 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8b4\" (UniqueName: \"kubernetes.io/projected/2815df79-ea7a-4a2a-8701-417f5aa7a8a4-kube-api-access-dg8b4\") pod \"heat-operator-controller-manager-67dd5f86f5-zl6sj\" (UID: \"2815df79-ea7a-4a2a-8701-417f5aa7a8a4\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" Mar 19 09:46:08.223651 master-0 kubenswrapper[27819]: I0319 09:46:08.220871 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws9dt\" (UniqueName: \"kubernetes.io/projected/1b2e9b70-1291-457f-a686-22ac2ee57053-kube-api-access-ws9dt\") pod \"glance-operator-controller-manager-79df6bcc97-w2bpb\" (UID: \"1b2e9b70-1291-457f-a686-22ac2ee57053\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" Mar 19 09:46:08.223651 master-0 kubenswrapper[27819]: I0319 09:46:08.220900 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bkzl\" (UniqueName: \"kubernetes.io/projected/7d2df03a-c295-40bc-a256-936fc90ffe7f-kube-api-access-9bkzl\") pod \"designate-operator-controller-manager-588d4d986b-h4zlg\" (UID: \"7d2df03a-c295-40bc-a256-936fc90ffe7f\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" Mar 19 09:46:08.236071 master-0 kubenswrapper[27819]: I0319 09:46:08.226215 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2"] Mar 19 09:46:08.237179 master-0 kubenswrapper[27819]: I0319 09:46:08.236830 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"] Mar 19 09:46:08.239296 master-0 
kubenswrapper[27819]: I0319 09:46:08.238165 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:08.266435 master-0 kubenswrapper[27819]: I0319 09:46:08.252848 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 19 09:46:08.266435 master-0 kubenswrapper[27819]: I0319 09:46:08.253603 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" Mar 19 09:46:08.270171 master-0 kubenswrapper[27819]: I0319 09:46:08.270139 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"] Mar 19 09:46:08.272208 master-0 kubenswrapper[27819]: I0319 09:46:08.272165 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bkzl\" (UniqueName: \"kubernetes.io/projected/7d2df03a-c295-40bc-a256-936fc90ffe7f-kube-api-access-9bkzl\") pod \"designate-operator-controller-manager-588d4d986b-h4zlg\" (UID: \"7d2df03a-c295-40bc-a256-936fc90ffe7f\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" Mar 19 09:46:08.274202 master-0 kubenswrapper[27819]: I0319 09:46:08.274156 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws9dt\" (UniqueName: \"kubernetes.io/projected/1b2e9b70-1291-457f-a686-22ac2ee57053-kube-api-access-ws9dt\") pod \"glance-operator-controller-manager-79df6bcc97-w2bpb\" (UID: \"1b2e9b70-1291-457f-a686-22ac2ee57053\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" Mar 19 09:46:08.289241 master-0 kubenswrapper[27819]: I0319 09:46:08.288688 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" Mar 19 09:46:08.299973 master-0 kubenswrapper[27819]: I0319 09:46:08.299923 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r"] Mar 19 09:46:08.326885 master-0 kubenswrapper[27819]: I0319 09:46:08.307029 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r" Mar 19 09:46:08.326885 master-0 kubenswrapper[27819]: I0319 09:46:08.323224 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfvw8\" (UniqueName: \"kubernetes.io/projected/16c86233-8caa-4255-a0f0-f06ceb351389-kube-api-access-wfvw8\") pod \"horizon-operator-controller-manager-8464cc45fb-bzns2\" (UID: \"16c86233-8caa-4255-a0f0-f06ceb351389\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2" Mar 19 09:46:08.326885 master-0 kubenswrapper[27819]: I0319 09:46:08.323365 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz5bk\" (UniqueName: \"kubernetes.io/projected/185fa92f-4bb6-4254-a759-8162f8fa8a11-kube-api-access-gz5bk\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:08.326885 master-0 kubenswrapper[27819]: I0319 09:46:08.323460 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg8b4\" (UniqueName: \"kubernetes.io/projected/2815df79-ea7a-4a2a-8701-417f5aa7a8a4-kube-api-access-dg8b4\") pod \"heat-operator-controller-manager-67dd5f86f5-zl6sj\" (UID: \"2815df79-ea7a-4a2a-8701-417f5aa7a8a4\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" 
Mar 19 09:46:08.326885 master-0 kubenswrapper[27819]: I0319 09:46:08.323530 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:08.339624 master-0 kubenswrapper[27819]: I0319 09:46:08.339013 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" Mar 19 09:46:08.387878 master-0 kubenswrapper[27819]: I0319 09:46:08.382932 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r"] Mar 19 09:46:08.406512 master-0 kubenswrapper[27819]: I0319 09:46:08.403312 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg8b4\" (UniqueName: \"kubernetes.io/projected/2815df79-ea7a-4a2a-8701-417f5aa7a8a4-kube-api-access-dg8b4\") pod \"heat-operator-controller-manager-67dd5f86f5-zl6sj\" (UID: \"2815df79-ea7a-4a2a-8701-417f5aa7a8a4\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" Mar 19 09:46:08.415971 master-0 kubenswrapper[27819]: I0319 09:46:08.410070 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"] Mar 19 09:46:08.418188 master-0 kubenswrapper[27819]: I0319 09:46:08.416984 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj" Mar 19 09:46:08.418767 master-0 kubenswrapper[27819]: I0319 09:46:08.418699 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"] Mar 19 09:46:08.425862 master-0 kubenswrapper[27819]: I0319 09:46:08.425822 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"] Mar 19 09:46:08.427283 master-0 kubenswrapper[27819]: I0319 09:46:08.427172 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nnn\" (UniqueName: \"kubernetes.io/projected/8903724a-0684-45e7-918d-3a4a90c33d8f-kube-api-access-c5nnn\") pod \"ironic-operator-controller-manager-6f787dddc9-g4k7r\" (UID: \"8903724a-0684-45e7-918d-3a4a90c33d8f\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r" Mar 19 09:46:08.427283 master-0 kubenswrapper[27819]: I0319 09:46:08.427222 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:08.427283 master-0 kubenswrapper[27819]: I0319 09:46:08.427259 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfvw8\" (UniqueName: \"kubernetes.io/projected/16c86233-8caa-4255-a0f0-f06ceb351389-kube-api-access-wfvw8\") pod \"horizon-operator-controller-manager-8464cc45fb-bzns2\" (UID: \"16c86233-8caa-4255-a0f0-f06ceb351389\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2" Mar 19 09:46:08.427418 master-0 kubenswrapper[27819]: I0319 
09:46:08.427319 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz5bk\" (UniqueName: \"kubernetes.io/projected/185fa92f-4bb6-4254-a759-8162f8fa8a11-kube-api-access-gz5bk\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"
Mar 19 09:46:08.428008 master-0 kubenswrapper[27819]: E0319 09:46:08.427831 27819 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:08.428008 master-0 kubenswrapper[27819]: E0319 09:46:08.427878 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert podName:185fa92f-4bb6-4254-a759-8162f8fa8a11 nodeName:}" failed. No retries permitted until 2026-03-19 09:46:08.927859285 +0000 UTC m=+753.849436977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert") pod "infra-operator-controller-manager-7dd6bb94c9-2lnjh" (UID: "185fa92f-4bb6-4254-a759-8162f8fa8a11") : secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:08.429181 master-0 kubenswrapper[27819]: I0319 09:46:08.429148 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"
Mar 19 09:46:08.431273 master-0 kubenswrapper[27819]: I0319 09:46:08.431241 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj"
Mar 19 09:46:08.467820 master-0 kubenswrapper[27819]: I0319 09:46:08.458916 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb"
Mar 19 09:46:08.612648 master-0 kubenswrapper[27819]: I0319 09:46:08.593791 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"]
Mar 19 09:46:08.635571 master-0 kubenswrapper[27819]: I0319 09:46:08.613941 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnkw\" (UniqueName: \"kubernetes.io/projected/2ec041b9-4f3a-472c-9410-5c14c32990fc-kube-api-access-cjnkw\") pod \"keystone-operator-controller-manager-768b96df4c-wljbj\" (UID: \"2ec041b9-4f3a-472c-9410-5c14c32990fc\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"
Mar 19 09:46:08.635571 master-0 kubenswrapper[27819]: I0319 09:46:08.614071 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj6ft\" (UniqueName: \"kubernetes.io/projected/7f313579-c939-4189-be09-93430e848d9f-kube-api-access-jj6ft\") pod \"manila-operator-controller-manager-55f864c847-knjcc\" (UID: \"7f313579-c939-4189-be09-93430e848d9f\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"
Mar 19 09:46:08.635571 master-0 kubenswrapper[27819]: I0319 09:46:08.614210 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nnn\" (UniqueName: \"kubernetes.io/projected/8903724a-0684-45e7-918d-3a4a90c33d8f-kube-api-access-c5nnn\") pod \"ironic-operator-controller-manager-6f787dddc9-g4k7r\" (UID: \"8903724a-0684-45e7-918d-3a4a90c33d8f\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r"
Mar 19 09:46:08.741213 master-0 kubenswrapper[27819]: I0319 09:46:08.730564 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjnkw\" (UniqueName: \"kubernetes.io/projected/2ec041b9-4f3a-472c-9410-5c14c32990fc-kube-api-access-cjnkw\") pod \"keystone-operator-controller-manager-768b96df4c-wljbj\" (UID: \"2ec041b9-4f3a-472c-9410-5c14c32990fc\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"
Mar 19 09:46:08.741213 master-0 kubenswrapper[27819]: I0319 09:46:08.730646 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj6ft\" (UniqueName: \"kubernetes.io/projected/7f313579-c939-4189-be09-93430e848d9f-kube-api-access-jj6ft\") pod \"manila-operator-controller-manager-55f864c847-knjcc\" (UID: \"7f313579-c939-4189-be09-93430e848d9f\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"
Mar 19 09:46:08.741213 master-0 kubenswrapper[27819]: I0319 09:46:08.739783 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfvw8\" (UniqueName: \"kubernetes.io/projected/16c86233-8caa-4255-a0f0-f06ceb351389-kube-api-access-wfvw8\") pod \"horizon-operator-controller-manager-8464cc45fb-bzns2\" (UID: \"16c86233-8caa-4255-a0f0-f06ceb351389\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2"
Mar 19 09:46:08.758097 master-0 kubenswrapper[27819]: I0319 09:46:08.752949 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz5bk\" (UniqueName: \"kubernetes.io/projected/185fa92f-4bb6-4254-a759-8162f8fa8a11-kube-api-access-gz5bk\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"
Mar 19 09:46:08.778318 master-0 kubenswrapper[27819]: I0319 09:46:08.778269 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nnn\" (UniqueName: \"kubernetes.io/projected/8903724a-0684-45e7-918d-3a4a90c33d8f-kube-api-access-c5nnn\") pod \"ironic-operator-controller-manager-6f787dddc9-g4k7r\" (UID: \"8903724a-0684-45e7-918d-3a4a90c33d8f\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r"
Mar 19 09:46:08.784912 master-0 kubenswrapper[27819]: I0319 09:46:08.784882 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"]
Mar 19 09:46:08.786215 master-0 kubenswrapper[27819]: I0319 09:46:08.786164 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj6ft\" (UniqueName: \"kubernetes.io/projected/7f313579-c939-4189-be09-93430e848d9f-kube-api-access-jj6ft\") pod \"manila-operator-controller-manager-55f864c847-knjcc\" (UID: \"7f313579-c939-4189-be09-93430e848d9f\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"
Mar 19 09:46:08.795420 master-0 kubenswrapper[27819]: I0319 09:46:08.795366 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"
Mar 19 09:46:08.797952 master-0 kubenswrapper[27819]: I0319 09:46:08.797524 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjnkw\" (UniqueName: \"kubernetes.io/projected/2ec041b9-4f3a-472c-9410-5c14c32990fc-kube-api-access-cjnkw\") pod \"keystone-operator-controller-manager-768b96df4c-wljbj\" (UID: \"2ec041b9-4f3a-472c-9410-5c14c32990fc\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"
Mar 19 09:46:08.798935 master-0 kubenswrapper[27819]: I0319 09:46:08.798883 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"]
Mar 19 09:46:08.800783 master-0 kubenswrapper[27819]: I0319 09:46:08.800754 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"
Mar 19 09:46:08.809847 master-0 kubenswrapper[27819]: I0319 09:46:08.809794 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"]
Mar 19 09:46:08.810159 master-0 kubenswrapper[27819]: I0319 09:46:08.810137 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"
Mar 19 09:46:08.821832 master-0 kubenswrapper[27819]: I0319 09:46:08.821780 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"]
Mar 19 09:46:08.882896 master-0 kubenswrapper[27819]: I0319 09:46:08.881220 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"
Mar 19 09:46:08.893595 master-0 kubenswrapper[27819]: I0319 09:46:08.893256 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"]
Mar 19 09:46:08.912630 master-0 kubenswrapper[27819]: I0319 09:46:08.912593 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"]
Mar 19 09:46:08.912753 master-0 kubenswrapper[27819]: I0319 09:46:08.912701 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"
Mar 19 09:46:08.936620 master-0 kubenswrapper[27819]: I0319 09:46:08.936281 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnn8\" (UniqueName: \"kubernetes.io/projected/d0c53c39-36fd-4e38-8031-e22003748723-kube-api-access-lxnn8\") pod \"mariadb-operator-controller-manager-67ccfc9778-zpr6w\" (UID: \"d0c53c39-36fd-4e38-8031-e22003748723\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"
Mar 19 09:46:08.942312 master-0 kubenswrapper[27819]: I0319 09:46:08.936900 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"
Mar 19 09:46:08.942312 master-0 kubenswrapper[27819]: I0319 09:46:08.937000 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxtb2\" (UniqueName: \"kubernetes.io/projected/17e90888-d0f7-447c-965a-ed826e5668b5-kube-api-access-rxtb2\") pod \"neutron-operator-controller-manager-767865f676-hm9pj\" (UID: \"17e90888-d0f7-447c-965a-ed826e5668b5\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"
Mar 19 09:46:08.942312 master-0 kubenswrapper[27819]: E0319 09:46:08.937272 27819 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:08.942312 master-0 kubenswrapper[27819]: E0319 09:46:08.937330 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert podName:185fa92f-4bb6-4254-a759-8162f8fa8a11 nodeName:}" failed. No retries permitted until 2026-03-19 09:46:09.937307928 +0000 UTC m=+754.858885620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert") pod "infra-operator-controller-manager-7dd6bb94c9-2lnjh" (UID: "185fa92f-4bb6-4254-a759-8162f8fa8a11") : secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:08.942312 master-0 kubenswrapper[27819]: I0319 09:46:08.941973 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"]
Mar 19 09:46:08.943495 master-0 kubenswrapper[27819]: I0319 09:46:08.943383 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"
Mar 19 09:46:08.963723 master-0 kubenswrapper[27819]: I0319 09:46:08.963668 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"]
Mar 19 09:46:08.968438 master-0 kubenswrapper[27819]: I0319 09:46:08.968401 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg"]
Mar 19 09:46:09.022288 master-0 kubenswrapper[27819]: I0319 09:46:09.013690 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2"
Mar 19 09:46:09.030001 master-0 kubenswrapper[27819]: I0319 09:46:09.027878 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"]
Mar 19 09:46:09.030001 master-0 kubenswrapper[27819]: I0319 09:46:09.029212 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:09.036948 master-0 kubenswrapper[27819]: I0319 09:46:09.034690 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 19 09:46:09.042429 master-0 kubenswrapper[27819]: I0319 09:46:09.040445 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq825\" (UniqueName: \"kubernetes.io/projected/4a786353-9349-45a7-a7b2-c506296f2525-kube-api-access-dq825\") pod \"octavia-operator-controller-manager-5b9f45d989-bmnbx\" (UID: \"4a786353-9349-45a7-a7b2-c506296f2525\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"
Mar 19 09:46:09.042429 master-0 kubenswrapper[27819]: I0319 09:46:09.040496 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7qf\" (UniqueName: \"kubernetes.io/projected/8feb876d-d28d-45ba-8bae-02cf56732e7b-kube-api-access-7n7qf\") pod \"nova-operator-controller-manager-5d488d59fb-rcdb7\" (UID: \"8feb876d-d28d-45ba-8bae-02cf56732e7b\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"
Mar 19 09:46:09.042429 master-0 kubenswrapper[27819]: I0319 09:46:09.040607 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxtb2\" (UniqueName: \"kubernetes.io/projected/17e90888-d0f7-447c-965a-ed826e5668b5-kube-api-access-rxtb2\") pod \"neutron-operator-controller-manager-767865f676-hm9pj\" (UID: \"17e90888-d0f7-447c-965a-ed826e5668b5\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"
Mar 19 09:46:09.042429 master-0 kubenswrapper[27819]: I0319 09:46:09.040693 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxnn8\" (UniqueName: \"kubernetes.io/projected/d0c53c39-36fd-4e38-8031-e22003748723-kube-api-access-lxnn8\") pod \"mariadb-operator-controller-manager-67ccfc9778-zpr6w\" (UID: \"d0c53c39-36fd-4e38-8031-e22003748723\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"
Mar 19 09:46:09.050814 master-0 kubenswrapper[27819]: I0319 09:46:09.049646 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8"]
Mar 19 09:46:09.050814 master-0 kubenswrapper[27819]: I0319 09:46:09.050058 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r"
Mar 19 09:46:09.098213 master-0 kubenswrapper[27819]: I0319 09:46:09.098082 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxtb2\" (UniqueName: \"kubernetes.io/projected/17e90888-d0f7-447c-965a-ed826e5668b5-kube-api-access-rxtb2\") pod \"neutron-operator-controller-manager-767865f676-hm9pj\" (UID: \"17e90888-d0f7-447c-965a-ed826e5668b5\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"
Mar 19 09:46:09.098213 master-0 kubenswrapper[27819]: I0319 09:46:09.098465 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"]
Mar 19 09:46:09.108332 master-0 kubenswrapper[27819]: I0319 09:46:09.102520 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"
Mar 19 09:46:09.115277 master-0 kubenswrapper[27819]: I0319 09:46:09.114817 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxnn8\" (UniqueName: \"kubernetes.io/projected/d0c53c39-36fd-4e38-8031-e22003748723-kube-api-access-lxnn8\") pod \"mariadb-operator-controller-manager-67ccfc9778-zpr6w\" (UID: \"d0c53c39-36fd-4e38-8031-e22003748723\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"
Mar 19 09:46:09.148315 master-0 kubenswrapper[27819]: I0319 09:46:09.147165 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzsl5\" (UniqueName: \"kubernetes.io/projected/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-kube-api-access-tzsl5\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:09.148315 master-0 kubenswrapper[27819]: I0319 09:46:09.147344 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq825\" (UniqueName: \"kubernetes.io/projected/4a786353-9349-45a7-a7b2-c506296f2525-kube-api-access-dq825\") pod \"octavia-operator-controller-manager-5b9f45d989-bmnbx\" (UID: \"4a786353-9349-45a7-a7b2-c506296f2525\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"
Mar 19 09:46:09.148315 master-0 kubenswrapper[27819]: I0319 09:46:09.147377 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:09.148315 master-0 kubenswrapper[27819]: I0319 09:46:09.147404 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7qf\" (UniqueName: \"kubernetes.io/projected/8feb876d-d28d-45ba-8bae-02cf56732e7b-kube-api-access-7n7qf\") pod \"nova-operator-controller-manager-5d488d59fb-rcdb7\" (UID: \"8feb876d-d28d-45ba-8bae-02cf56732e7b\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"
Mar 19 09:46:09.152401 master-0 kubenswrapper[27819]: I0319 09:46:09.152361 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"]
Mar 19 09:46:09.160530 master-0 kubenswrapper[27819]: I0319 09:46:09.159408 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"
Mar 19 09:46:09.164965 master-0 kubenswrapper[27819]: I0319 09:46:09.161399 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"]
Mar 19 09:46:09.190997 master-0 kubenswrapper[27819]: I0319 09:46:09.190314 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"]
Mar 19 09:46:09.194004 master-0 kubenswrapper[27819]: I0319 09:46:09.193954 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"]
Mar 19 09:46:09.196373 master-0 kubenswrapper[27819]: I0319 09:46:09.196344 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"
Mar 19 09:46:09.223813 master-0 kubenswrapper[27819]: I0319 09:46:09.223774 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq825\" (UniqueName: \"kubernetes.io/projected/4a786353-9349-45a7-a7b2-c506296f2525-kube-api-access-dq825\") pod \"octavia-operator-controller-manager-5b9f45d989-bmnbx\" (UID: \"4a786353-9349-45a7-a7b2-c506296f2525\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"
Mar 19 09:46:09.232935 master-0 kubenswrapper[27819]: I0319 09:46:09.232894 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7qf\" (UniqueName: \"kubernetes.io/projected/8feb876d-d28d-45ba-8bae-02cf56732e7b-kube-api-access-7n7qf\") pod \"nova-operator-controller-manager-5d488d59fb-rcdb7\" (UID: \"8feb876d-d28d-45ba-8bae-02cf56732e7b\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"
Mar 19 09:46:09.253060 master-0 kubenswrapper[27819]: I0319 09:46:09.252985 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzsl5\" (UniqueName: \"kubernetes.io/projected/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-kube-api-access-tzsl5\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:09.253233 master-0 kubenswrapper[27819]: I0319 09:46:09.253078 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7hb\" (UniqueName: \"kubernetes.io/projected/914356dd-4ccd-4818-aeaf-16f32fc9a249-kube-api-access-jk7hb\") pod \"ovn-operator-controller-manager-884679f54-czvjz\" (UID: \"914356dd-4ccd-4818-aeaf-16f32fc9a249\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"
Mar 19 09:46:09.253233 master-0 kubenswrapper[27819]: I0319 09:46:09.253215 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:09.253307 master-0 kubenswrapper[27819]: I0319 09:46:09.253280 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fhp\" (UniqueName: \"kubernetes.io/projected/81ebb272-c362-4900-91ef-ccd4e48aaec4-kube-api-access-l5fhp\") pod \"placement-operator-controller-manager-5784578c99-8j8rx\" (UID: \"81ebb272-c362-4900-91ef-ccd4e48aaec4\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"
Mar 19 09:46:09.253826 master-0 kubenswrapper[27819]: E0319 09:46:09.253804 27819 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:09.253881 master-0 kubenswrapper[27819]: E0319 09:46:09.253857 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert podName:86af73bb-e6b4-4efd-a66c-8cb32bf3d02d nodeName:}" failed. No retries permitted until 2026-03-19 09:46:09.753842365 +0000 UTC m=+754.675420047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899dj6bn" (UID: "86af73bb-e6b4-4efd-a66c-8cb32bf3d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:09.294416 master-0 kubenswrapper[27819]: I0319 09:46:09.294366 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"
Mar 19 09:46:09.310346 master-0 kubenswrapper[27819]: I0319 09:46:09.310305 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzsl5\" (UniqueName: \"kubernetes.io/projected/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-kube-api-access-tzsl5\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:09.358037 master-0 kubenswrapper[27819]: I0319 09:46:09.354702 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fhp\" (UniqueName: \"kubernetes.io/projected/81ebb272-c362-4900-91ef-ccd4e48aaec4-kube-api-access-l5fhp\") pod \"placement-operator-controller-manager-5784578c99-8j8rx\" (UID: \"81ebb272-c362-4900-91ef-ccd4e48aaec4\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"
Mar 19 09:46:09.358037 master-0 kubenswrapper[27819]: I0319 09:46:09.354805 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk7hb\" (UniqueName: \"kubernetes.io/projected/914356dd-4ccd-4818-aeaf-16f32fc9a249-kube-api-access-jk7hb\") pod \"ovn-operator-controller-manager-884679f54-czvjz\" (UID: \"914356dd-4ccd-4818-aeaf-16f32fc9a249\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"
Mar 19 09:46:09.358037 master-0 kubenswrapper[27819]: I0319 09:46:09.354874 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"
Mar 19 09:46:09.394766 master-0 kubenswrapper[27819]: I0319 09:46:09.394668 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"
Mar 19 09:46:09.408985 master-0 kubenswrapper[27819]: I0319 09:46:09.408940 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"]
Mar 19 09:46:09.417638 master-0 kubenswrapper[27819]: I0319 09:46:09.417349 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" event={"ID":"7d2df03a-c295-40bc-a256-936fc90ffe7f","Type":"ContainerStarted","Data":"0fc9751349c61089a721e8ceee37519e2a77f4f370ca18f229f58b45a98560ca"}
Mar 19 09:46:09.417638 master-0 kubenswrapper[27819]: I0319 09:46:09.417503 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" event={"ID":"c066a62a-37a8-453c-9267-fe8a8b744076","Type":"ContainerStarted","Data":"680ea1c373fffde862dd187a20e82a8d71dd89b7b9cb58a6bcb5e825ffec7eba"}
Mar 19 09:46:09.417871 master-0 kubenswrapper[27819]: I0319 09:46:09.417590 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"]
Mar 19 09:46:09.417871 master-0 kubenswrapper[27819]: I0319 09:46:09.417717 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"]
Mar 19 09:46:09.419717 master-0 kubenswrapper[27819]: I0319 09:46:09.419687 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"
Mar 19 09:46:09.420836 master-0 kubenswrapper[27819]: I0319 09:46:09.420800 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"]
Mar 19 09:46:09.422614 master-0 kubenswrapper[27819]: I0319 09:46:09.422593 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"]
Mar 19 09:46:09.422809 master-0 kubenswrapper[27819]: I0319 09:46:09.422751 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"
Mar 19 09:46:09.422981 master-0 kubenswrapper[27819]: I0319 09:46:09.420896 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"
Mar 19 09:46:09.430242 master-0 kubenswrapper[27819]: I0319 09:46:09.430191 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fhp\" (UniqueName: \"kubernetes.io/projected/81ebb272-c362-4900-91ef-ccd4e48aaec4-kube-api-access-l5fhp\") pod \"placement-operator-controller-manager-5784578c99-8j8rx\" (UID: \"81ebb272-c362-4900-91ef-ccd4e48aaec4\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"
Mar 19 09:46:09.431406 master-0 kubenswrapper[27819]: I0319 09:46:09.431387 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk7hb\" (UniqueName: \"kubernetes.io/projected/914356dd-4ccd-4818-aeaf-16f32fc9a249-kube-api-access-jk7hb\") pod \"ovn-operator-controller-manager-884679f54-czvjz\" (UID: \"914356dd-4ccd-4818-aeaf-16f32fc9a249\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"
Mar 19 09:46:09.439310 master-0 kubenswrapper[27819]: I0319 09:46:09.439231 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"]
Mar 19 09:46:09.446378 master-0 kubenswrapper[27819]: I0319 09:46:09.446339 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29"]
Mar 19 09:46:09.447633 master-0 kubenswrapper[27819]: I0319 09:46:09.447608 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29"
Mar 19 09:46:09.464843 master-0 kubenswrapper[27819]: I0319 09:46:09.464800 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29"]
Mar 19 09:46:09.475571 master-0 kubenswrapper[27819]: I0319 09:46:09.475500 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj"]
Mar 19 09:46:09.509614 master-0 kubenswrapper[27819]: I0319 09:46:09.509565 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"
Mar 19 09:46:09.552217 master-0 kubenswrapper[27819]: I0319 09:46:09.552176 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"
Mar 19 09:46:09.562405 master-0 kubenswrapper[27819]: I0319 09:46:09.562315 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx4h\" (UniqueName: \"kubernetes.io/projected/3579c069-58e4-4563-8f6f-67bf9950d04f-kube-api-access-wbx4h\") pod \"swift-operator-controller-manager-c674c5965-j87zk\" (UID: \"3579c069-58e4-4563-8f6f-67bf9950d04f\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"
Mar 19 09:46:09.562499 master-0 kubenswrapper[27819]: I0319 09:46:09.562419 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnfh4\" (UniqueName: \"kubernetes.io/projected/121a3aeb-64fc-43d2-8ba2-e88fe9349525-kube-api-access-pnfh4\") pod \"watcher-operator-controller-manager-6c4d75f7f9-8pg29\" (UID: \"121a3aeb-64fc-43d2-8ba2-e88fe9349525\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29"
Mar 19 09:46:09.562709 master-0 kubenswrapper[27819]: I0319 09:46:09.562631 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24hrm\" (UniqueName: \"kubernetes.io/projected/42499139-f602-4061-97c5-e20d54d0731b-kube-api-access-24hrm\") pod \"telemetry-operator-controller-manager-d6b694c5-jmprg\" (UID: \"42499139-f602-4061-97c5-e20d54d0731b\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"
Mar 19 09:46:09.563415 master-0 kubenswrapper[27819]: I0319 09:46:09.563277 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm85l\" (UniqueName: \"kubernetes.io/projected/11871ed4-c83d-4b10-bf10-1507c85a3581-kube-api-access-fm85l\") pod \"test-operator-controller-manager-5c5cb9c4d7-d7wld\" (UID: \"11871ed4-c83d-4b10-bf10-1507c85a3581\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"
Mar 19 09:46:09.586796 master-0 kubenswrapper[27819]: I0319 09:46:09.582942 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb"]
Mar 19 09:46:09.690666 master-0 kubenswrapper[27819]: I0319 09:46:09.667312 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbx4h\" (UniqueName: \"kubernetes.io/projected/3579c069-58e4-4563-8f6f-67bf9950d04f-kube-api-access-wbx4h\") pod \"swift-operator-controller-manager-c674c5965-j87zk\" (UID: \"3579c069-58e4-4563-8f6f-67bf9950d04f\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"
Mar 19 09:46:09.690666 master-0 kubenswrapper[27819]: I0319 09:46:09.690605 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnfh4\" (UniqueName: \"kubernetes.io/projected/121a3aeb-64fc-43d2-8ba2-e88fe9349525-kube-api-access-pnfh4\") pod \"watcher-operator-controller-manager-6c4d75f7f9-8pg29\" (UID: \"121a3aeb-64fc-43d2-8ba2-e88fe9349525\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29"
Mar 19 09:46:09.690771 master-0 kubenswrapper[27819]: I0319 09:46:09.690694 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24hrm\" (UniqueName: \"kubernetes.io/projected/42499139-f602-4061-97c5-e20d54d0731b-kube-api-access-24hrm\") pod \"telemetry-operator-controller-manager-d6b694c5-jmprg\" (UID: \"42499139-f602-4061-97c5-e20d54d0731b\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"
Mar 19 09:46:09.691570 master-0 kubenswrapper[27819]: I0319 09:46:09.691165 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm85l\" (UniqueName: \"kubernetes.io/projected/11871ed4-c83d-4b10-bf10-1507c85a3581-kube-api-access-fm85l\") pod \"test-operator-controller-manager-5c5cb9c4d7-d7wld\" (UID: \"11871ed4-c83d-4b10-bf10-1507c85a3581\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"
Mar 19 09:46:09.786377 master-0 kubenswrapper[27819]: I0319 09:46:09.785507 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnfh4\" (UniqueName: \"kubernetes.io/projected/121a3aeb-64fc-43d2-8ba2-e88fe9349525-kube-api-access-pnfh4\") pod \"watcher-operator-controller-manager-6c4d75f7f9-8pg29\" (UID: \"121a3aeb-64fc-43d2-8ba2-e88fe9349525\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29"
Mar 19 09:46:09.793865 master-0 kubenswrapper[27819]: I0319 09:46:09.793408 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24hrm\" (UniqueName: \"kubernetes.io/projected/42499139-f602-4061-97c5-e20d54d0731b-kube-api-access-24hrm\") pod \"telemetry-operator-controller-manager-d6b694c5-jmprg\" (UID: \"42499139-f602-4061-97c5-e20d54d0731b\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"
Mar 19 09:46:09.793865 master-0 kubenswrapper[27819]: I0319 09:46:09.793450 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:09.793865 master-0 kubenswrapper[27819]: E0319 09:46:09.793555 27819 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:09.793865 master-0 kubenswrapper[27819]: E0319 09:46:09.793597 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert podName:86af73bb-e6b4-4efd-a66c-8cb32bf3d02d nodeName:}" failed. No retries permitted until 2026-03-19 09:46:10.793584018 +0000 UTC m=+755.715161700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899dj6bn" (UID: "86af73bb-e6b4-4efd-a66c-8cb32bf3d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:09.793865 master-0 kubenswrapper[27819]: I0319 09:46:09.793791 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbx4h\" (UniqueName: \"kubernetes.io/projected/3579c069-58e4-4563-8f6f-67bf9950d04f-kube-api-access-wbx4h\") pod \"swift-operator-controller-manager-c674c5965-j87zk\" (UID: \"3579c069-58e4-4563-8f6f-67bf9950d04f\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"
Mar 19 09:46:09.813501 master-0 kubenswrapper[27819]: I0319 09:46:09.810903 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm85l\" (UniqueName: \"kubernetes.io/projected/11871ed4-c83d-4b10-bf10-1507c85a3581-kube-api-access-fm85l\") pod \"test-operator-controller-manager-5c5cb9c4d7-d7wld\" (UID: \"11871ed4-c83d-4b10-bf10-1507c85a3581\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"
Mar 19 09:46:09.832996 master-0 kubenswrapper[27819]: I0319 09:46:09.831999 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"]
Mar 19 09:46:09.836713 master-0 kubenswrapper[27819]: I0319 09:46:09.836608 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"
Mar 19 09:46:09.844436 master-0 kubenswrapper[27819]: I0319 09:46:09.844374 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Mar 19 09:46:09.846853 master-0 kubenswrapper[27819]: I0319 09:46:09.844569 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Mar 19 09:46:09.848857 master-0 kubenswrapper[27819]: I0319 09:46:09.848792 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"]
Mar 19 09:46:09.871818 master-0 kubenswrapper[27819]: I0319 09:46:09.870156 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"
Mar 19 09:46:09.875860 master-0 kubenswrapper[27819]: I0319 09:46:09.875600 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"]
Mar 19 09:46:09.923494 master-0 kubenswrapper[27819]: I0319 09:46:09.919522 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"]
Mar 19 09:46:09.931574 master-0 kubenswrapper[27819]: W0319 09:46:09.930985 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f313579_c939_4189_be09_93430e848d9f.slice/crio-0c19dd224cbeafb10c4b31526f2910081e62a9cb14aff96bd6b88da65079e1f3 WatchSource:0}: Error finding container 0c19dd224cbeafb10c4b31526f2910081e62a9cb14aff96bd6b88da65079e1f3: Status 404 returned error can't find the container with id 0c19dd224cbeafb10c4b31526f2910081e62a9cb14aff96bd6b88da65079e1f3
Mar 19 09:46:09.954967 master-0 kubenswrapper[27819]: I0319 09:46:09.954923 27819 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg" Mar 19 09:46:09.987971 master-0 kubenswrapper[27819]: I0319 09:46:09.987231 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld" Mar 19 09:46:09.994553 master-0 kubenswrapper[27819]: I0319 09:46:09.994473 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc"] Mar 19 09:46:10.000643 master-0 kubenswrapper[27819]: I0319 09:46:10.000602 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" Mar 19 09:46:10.002294 master-0 kubenswrapper[27819]: I0319 09:46:10.002266 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.002380 master-0 kubenswrapper[27819]: I0319 09:46:10.002361 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87tc9\" (UniqueName: \"kubernetes.io/projected/8dca8916-c8e2-4a1c-a348-bf827c760abd-kube-api-access-87tc9\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.002696 master-0 kubenswrapper[27819]: I0319 09:46:10.002421 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.002696 master-0 kubenswrapper[27819]: I0319 09:46:10.002457 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:10.002696 master-0 kubenswrapper[27819]: E0319 09:46:10.002586 27819 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 19 09:46:10.002696 master-0 kubenswrapper[27819]: E0319 09:46:10.002623 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert podName:185fa92f-4bb6-4254-a759-8162f8fa8a11 nodeName:}" failed. No retries permitted until 2026-03-19 09:46:12.002612038 +0000 UTC m=+756.924189730 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert") pod "infra-operator-controller-manager-7dd6bb94c9-2lnjh" (UID: "185fa92f-4bb6-4254-a759-8162f8fa8a11") : secret "infra-operator-webhook-server-cert" not found Mar 19 09:46:10.012271 master-0 kubenswrapper[27819]: I0319 09:46:10.002945 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29" Mar 19 09:46:10.035828 master-0 kubenswrapper[27819]: I0319 09:46:10.035589 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb"] Mar 19 09:46:10.045653 master-0 kubenswrapper[27819]: I0319 09:46:10.045126 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc"] Mar 19 09:46:10.104353 master-0 kubenswrapper[27819]: I0319 09:46:10.104295 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87tc9\" (UniqueName: \"kubernetes.io/projected/8dca8916-c8e2-4a1c-a348-bf827c760abd-kube-api-access-87tc9\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.104563 master-0 kubenswrapper[27819]: I0319 09:46:10.104387 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszqh\" (UniqueName: \"kubernetes.io/projected/4283cfcf-3551-4738-8394-45bd27e4e8fa-kube-api-access-kszqh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5nvqc\" (UID: \"4283cfcf-3551-4738-8394-45bd27e4e8fa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" Mar 19 09:46:10.104563 master-0 kubenswrapper[27819]: I0319 09:46:10.104410 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.104563 master-0 
kubenswrapper[27819]: I0319 09:46:10.104470 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.104710 master-0 kubenswrapper[27819]: E0319 09:46:10.104629 27819 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 09:46:10.104710 master-0 kubenswrapper[27819]: E0319 09:46:10.104676 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:10.604661942 +0000 UTC m=+755.526239634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "metrics-server-cert" not found Mar 19 09:46:10.105099 master-0 kubenswrapper[27819]: E0319 09:46:10.105070 27819 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 09:46:10.105145 master-0 kubenswrapper[27819]: E0319 09:46:10.105104 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:10.605093511 +0000 UTC m=+755.526671203 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "webhook-server-cert" not found Mar 19 09:46:10.123330 master-0 kubenswrapper[27819]: I0319 09:46:10.122966 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87tc9\" (UniqueName: \"kubernetes.io/projected/8dca8916-c8e2-4a1c-a348-bf827c760abd-kube-api-access-87tc9\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.207100 master-0 kubenswrapper[27819]: I0319 09:46:10.207019 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszqh\" (UniqueName: \"kubernetes.io/projected/4283cfcf-3551-4738-8394-45bd27e4e8fa-kube-api-access-kszqh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5nvqc\" (UID: \"4283cfcf-3551-4738-8394-45bd27e4e8fa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" Mar 19 09:46:10.232403 master-0 kubenswrapper[27819]: I0319 09:46:10.231498 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszqh\" (UniqueName: \"kubernetes.io/projected/4283cfcf-3551-4738-8394-45bd27e4e8fa-kube-api-access-kszqh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5nvqc\" (UID: \"4283cfcf-3551-4738-8394-45bd27e4e8fa\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" Mar 19 09:46:10.356412 master-0 kubenswrapper[27819]: I0319 09:46:10.356110 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" Mar 19 09:46:10.418879 master-0 kubenswrapper[27819]: I0319 09:46:10.418836 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc" event={"ID":"7f313579-c939-4189-be09-93430e848d9f","Type":"ContainerStarted","Data":"0c19dd224cbeafb10c4b31526f2910081e62a9cb14aff96bd6b88da65079e1f3"} Mar 19 09:46:10.426753 master-0 kubenswrapper[27819]: I0319 09:46:10.426691 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" event={"ID":"1b2e9b70-1291-457f-a686-22ac2ee57053","Type":"ContainerStarted","Data":"ccc564392996c4c86c3dbdaf314da66ea5d3ea3febbdd35ff3618f415f1c853f"} Mar 19 09:46:10.428537 master-0 kubenswrapper[27819]: I0319 09:46:10.428502 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj" event={"ID":"2ec041b9-4f3a-472c-9410-5c14c32990fc","Type":"ContainerStarted","Data":"836c73e1379602dafac45d381114f84a393eda2fa09c7fc6f437101983bda906"} Mar 19 09:46:10.430273 master-0 kubenswrapper[27819]: I0319 09:46:10.430229 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" event={"ID":"2815df79-ea7a-4a2a-8701-417f5aa7a8a4","Type":"ContainerStarted","Data":"b84be27502408a281dfa8aad6d9a047a118db927389a5cd9710f5694b27dcf06"} Mar 19 09:46:10.432481 master-0 kubenswrapper[27819]: I0319 09:46:10.431461 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" event={"ID":"7aaa7fb9-ba38-49ae-8cca-6b699862813d","Type":"ContainerStarted","Data":"d34f4be9854c1695aa788d862abb2254607a5dccc92dd3a2312848dc95a17e89"} Mar 19 09:46:10.569399 master-0 kubenswrapper[27819]: I0319 09:46:10.569264 27819 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r"] Mar 19 09:46:10.569399 master-0 kubenswrapper[27819]: I0319 09:46:10.569321 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"] Mar 19 09:46:10.581269 master-0 kubenswrapper[27819]: I0319 09:46:10.577076 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2"] Mar 19 09:46:10.623205 master-0 kubenswrapper[27819]: I0319 09:46:10.617886 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.623205 master-0 kubenswrapper[27819]: I0319 09:46:10.618006 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:10.623205 master-0 kubenswrapper[27819]: E0319 09:46:10.618191 27819 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 09:46:10.623205 master-0 kubenswrapper[27819]: E0319 09:46:10.618256 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. 
No retries permitted until 2026-03-19 09:46:11.618234248 +0000 UTC m=+756.539811940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "metrics-server-cert" not found Mar 19 09:46:10.623205 master-0 kubenswrapper[27819]: E0319 09:46:10.618370 27819 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 09:46:10.623205 master-0 kubenswrapper[27819]: E0319 09:46:10.618469 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:11.618447213 +0000 UTC m=+756.540024905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "webhook-server-cert" not found Mar 19 09:46:10.814253 master-0 kubenswrapper[27819]: W0319 09:46:10.814201 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e90888_d0f7_447c_965a_ed826e5668b5.slice/crio-ff022835989b3e064c8dca61b0bb7b81cb84cc95191e457bb975598b8760340e WatchSource:0}: Error finding container ff022835989b3e064c8dca61b0bb7b81cb84cc95191e457bb975598b8760340e: Status 404 returned error can't find the container with id ff022835989b3e064c8dca61b0bb7b81cb84cc95191e457bb975598b8760340e Mar 19 09:46:10.821294 master-0 kubenswrapper[27819]: I0319 09:46:10.821219 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" Mar 19 09:46:10.821485 master-0 kubenswrapper[27819]: E0319 09:46:10.821459 27819 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 09:46:10.821613 master-0 kubenswrapper[27819]: E0319 09:46:10.821518 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert podName:86af73bb-e6b4-4efd-a66c-8cb32bf3d02d nodeName:}" failed. No retries permitted until 2026-03-19 09:46:12.821500258 +0000 UTC m=+757.743077950 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899dj6bn" (UID: "86af73bb-e6b4-4efd-a66c-8cb32bf3d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 19 09:46:10.822040 master-0 kubenswrapper[27819]: I0319 09:46:10.822011 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7"] Mar 19 09:46:10.840174 master-0 kubenswrapper[27819]: I0319 09:46:10.840126 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"] Mar 19 09:46:10.849278 master-0 kubenswrapper[27819]: I0319 09:46:10.849237 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"] Mar 19 09:46:11.446144 master-0 kubenswrapper[27819]: I0319 09:46:11.445950 27819 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2" event={"ID":"16c86233-8caa-4255-a0f0-f06ceb351389","Type":"ContainerStarted","Data":"70b4650d5bcf5ba6fede92256c3d5932daed10a841397ccd14243ffe71f6a97a"} Mar 19 09:46:11.448175 master-0 kubenswrapper[27819]: I0319 09:46:11.448106 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj" event={"ID":"17e90888-d0f7-447c-965a-ed826e5668b5","Type":"ContainerStarted","Data":"ff022835989b3e064c8dca61b0bb7b81cb84cc95191e457bb975598b8760340e"} Mar 19 09:46:11.451670 master-0 kubenswrapper[27819]: I0319 09:46:11.451480 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r" event={"ID":"8903724a-0684-45e7-918d-3a4a90c33d8f","Type":"ContainerStarted","Data":"f741449832426fc3e386cc0ad5e0f587d87599f26694596b46e7b8dc1c0207e6"} Mar 19 09:46:11.453389 master-0 kubenswrapper[27819]: I0319 09:46:11.453338 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w" event={"ID":"d0c53c39-36fd-4e38-8031-e22003748723","Type":"ContainerStarted","Data":"55aff34e0612978175f1839fe4b3d91501b03d6e949be979b07cc896889dec19"} Mar 19 09:46:11.455413 master-0 kubenswrapper[27819]: I0319 09:46:11.455351 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz" event={"ID":"914356dd-4ccd-4818-aeaf-16f32fc9a249","Type":"ContainerStarted","Data":"6aacd39daade4f1c22b2a129056c3456833b04031851d85c96fa93d3491e7097"} Mar 19 09:46:11.457747 master-0 kubenswrapper[27819]: I0319 09:46:11.457708 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7" 
event={"ID":"8feb876d-d28d-45ba-8bae-02cf56732e7b","Type":"ContainerStarted","Data":"b9d5db9f9fee3b68a5c911f230a8363df2e94932824d0b4735e4ef85185487d6"} Mar 19 09:46:11.497319 master-0 kubenswrapper[27819]: I0319 09:46:11.497239 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"] Mar 19 09:46:11.511693 master-0 kubenswrapper[27819]: W0319 09:46:11.511633 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42499139_f602_4061_97c5_e20d54d0731b.slice/crio-74bbf5fa5e61e3a0078e443748ee7865d1edb8e97fede59bbf686963dfa44d24 WatchSource:0}: Error finding container 74bbf5fa5e61e3a0078e443748ee7865d1edb8e97fede59bbf686963dfa44d24: Status 404 returned error can't find the container with id 74bbf5fa5e61e3a0078e443748ee7865d1edb8e97fede59bbf686963dfa44d24 Mar 19 09:46:11.638465 master-0 kubenswrapper[27819]: I0319 09:46:11.638396 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"] Mar 19 09:46:11.640907 master-0 kubenswrapper[27819]: I0319 09:46:11.640862 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:11.640997 master-0 kubenswrapper[27819]: I0319 09:46:11.640957 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " 
pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:11.641156 master-0 kubenswrapper[27819]: E0319 09:46:11.641137 27819 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 19 09:46:11.641206 master-0 kubenswrapper[27819]: E0319 09:46:11.641190 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:13.641175046 +0000 UTC m=+758.562752738 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "metrics-server-cert" not found Mar 19 09:46:11.643466 master-0 kubenswrapper[27819]: E0319 09:46:11.643380 27819 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 19 09:46:11.643466 master-0 kubenswrapper[27819]: E0319 09:46:11.643443 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:13.643424377 +0000 UTC m=+758.565002069 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "webhook-server-cert" not found Mar 19 09:46:11.655351 master-0 kubenswrapper[27819]: I0319 09:46:11.652648 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"] Mar 19 09:46:11.656166 master-0 kubenswrapper[27819]: W0319 09:46:11.656119 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3579c069_58e4_4563_8f6f_67bf9950d04f.slice/crio-a7a398860b86cfb9ca50a38a5975fb24b9cd09c8af237b6c2f28af841e362af2 WatchSource:0}: Error finding container a7a398860b86cfb9ca50a38a5975fb24b9cd09c8af237b6c2f28af841e362af2: Status 404 returned error can't find the container with id a7a398860b86cfb9ca50a38a5975fb24b9cd09c8af237b6c2f28af841e362af2 Mar 19 09:46:11.677936 master-0 kubenswrapper[27819]: I0319 09:46:11.677875 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"] Mar 19 09:46:11.701814 master-0 kubenswrapper[27819]: I0319 09:46:11.701632 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"] Mar 19 09:46:11.718441 master-0 kubenswrapper[27819]: I0319 09:46:11.717998 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc"] Mar 19 09:46:11.730336 master-0 kubenswrapper[27819]: I0319 09:46:11.729951 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29"] Mar 19 09:46:12.049964 master-0 kubenswrapper[27819]: I0319 09:46:12.049605 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"
Mar 19 09:46:12.049964 master-0 kubenswrapper[27819]: E0319 09:46:12.049757 27819 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:12.049964 master-0 kubenswrapper[27819]: E0319 09:46:12.049834 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert podName:185fa92f-4bb6-4254-a759-8162f8fa8a11 nodeName:}" failed. No retries permitted until 2026-03-19 09:46:16.049815373 +0000 UTC m=+760.971393065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert") pod "infra-operator-controller-manager-7dd6bb94c9-2lnjh" (UID: "185fa92f-4bb6-4254-a759-8162f8fa8a11") : secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:13.065620 master-0 kubenswrapper[27819]: I0319 09:46:13.064327 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:13.065620 master-0 kubenswrapper[27819]: E0319 09:46:13.064704 27819 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:13.065620 master-0 kubenswrapper[27819]: E0319 09:46:13.064765 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert podName:86af73bb-e6b4-4efd-a66c-8cb32bf3d02d nodeName:}" failed. No retries permitted until 2026-03-19 09:46:17.06474621 +0000 UTC m=+761.986323902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899dj6bn" (UID: "86af73bb-e6b4-4efd-a66c-8cb32bf3d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:13.151789 master-0 kubenswrapper[27819]: I0319 09:46:13.151714 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk" event={"ID":"3579c069-58e4-4563-8f6f-67bf9950d04f","Type":"ContainerStarted","Data":"a7a398860b86cfb9ca50a38a5975fb24b9cd09c8af237b6c2f28af841e362af2"}
Mar 19 09:46:13.177755 master-0 kubenswrapper[27819]: I0319 09:46:13.171725 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg" event={"ID":"42499139-f602-4061-97c5-e20d54d0731b","Type":"ContainerStarted","Data":"74bbf5fa5e61e3a0078e443748ee7865d1edb8e97fede59bbf686963dfa44d24"}
Mar 19 09:46:13.348420 master-0 kubenswrapper[27819]: W0319 09:46:13.344767 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ebb272_c362_4900_91ef_ccd4e48aaec4.slice/crio-8a418cbe60e70a01de8db825d8f0e0ceedeaa7b8c670c17c5ed59fc92fd47f8a WatchSource:0}: Error finding container 8a418cbe60e70a01de8db825d8f0e0ceedeaa7b8c670c17c5ed59fc92fd47f8a: Status 404 returned error can't find the container with id 8a418cbe60e70a01de8db825d8f0e0ceedeaa7b8c670c17c5ed59fc92fd47f8a
Mar 19 09:46:13.683859 master-0 kubenswrapper[27819]: I0319 09:46:13.683796 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"
Mar 19 09:46:13.684090 master-0 kubenswrapper[27819]: E0319 09:46:13.683979 27819 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 19 09:46:13.684090 master-0 kubenswrapper[27819]: E0319 09:46:13.684047 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:17.684027773 +0000 UTC m=+762.605605465 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "webhook-server-cert" not found
Mar 19 09:46:13.684090 master-0 kubenswrapper[27819]: I0319 09:46:13.684083 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"
Mar 19 09:46:13.684369 master-0 kubenswrapper[27819]: E0319 09:46:13.684323 27819 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 19 09:46:13.684448 master-0 kubenswrapper[27819]: E0319 09:46:13.684419 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:17.684401481 +0000 UTC m=+762.605979173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "metrics-server-cert" not found
Mar 19 09:46:14.187046 master-0 kubenswrapper[27819]: I0319 09:46:14.186976 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld" event={"ID":"11871ed4-c83d-4b10-bf10-1507c85a3581","Type":"ContainerStarted","Data":"e5ac625498aecf2e2b8e8af5497d81a111ac7b0b432592d356b85d0a0c3bc970"}
Mar 19 09:46:14.189168 master-0 kubenswrapper[27819]: I0319 09:46:14.189103 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx" event={"ID":"4a786353-9349-45a7-a7b2-c506296f2525","Type":"ContainerStarted","Data":"b2bd9eb74bb5e7852a042aab8f57761188931ee0a7dfd85becfe2b9ac7f782fa"}
Mar 19 09:46:14.191087 master-0 kubenswrapper[27819]: I0319 09:46:14.191036 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx" event={"ID":"81ebb272-c362-4900-91ef-ccd4e48aaec4","Type":"ContainerStarted","Data":"8a418cbe60e70a01de8db825d8f0e0ceedeaa7b8c670c17c5ed59fc92fd47f8a"}
Mar 19 09:46:14.193413 master-0 kubenswrapper[27819]: I0319 09:46:14.193370 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29" event={"ID":"121a3aeb-64fc-43d2-8ba2-e88fe9349525","Type":"ContainerStarted","Data":"df1150ef80415fac64040d637ffc932e785841a843390757ca8e2019b31215d8"}
Mar 19 09:46:14.194793 master-0 kubenswrapper[27819]: I0319 09:46:14.194747 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" event={"ID":"4283cfcf-3551-4738-8394-45bd27e4e8fa","Type":"ContainerStarted","Data":"5d597f9152c7009d20a7626df7fb22939a886a47af62f35d670f8e7eef0a3633"}
Mar 19 09:46:16.123902 master-0 kubenswrapper[27819]: I0319 09:46:16.123857 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"
Mar 19 09:46:16.124531 master-0 kubenswrapper[27819]: E0319 09:46:16.124014 27819 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:16.124626 master-0 kubenswrapper[27819]: E0319 09:46:16.124593 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert podName:185fa92f-4bb6-4254-a759-8162f8fa8a11 nodeName:}" failed. No retries permitted until 2026-03-19 09:46:24.1245717 +0000 UTC m=+769.046149392 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert") pod "infra-operator-controller-manager-7dd6bb94c9-2lnjh" (UID: "185fa92f-4bb6-4254-a759-8162f8fa8a11") : secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:17.065015 master-0 kubenswrapper[27819]: I0319 09:46:17.064853 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:17.065383 master-0 kubenswrapper[27819]: E0319 09:46:17.065072 27819 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:17.065383 master-0 kubenswrapper[27819]: E0319 09:46:17.065160 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert podName:86af73bb-e6b4-4efd-a66c-8cb32bf3d02d nodeName:}" failed. No retries permitted until 2026-03-19 09:46:25.065138886 +0000 UTC m=+769.986716588 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899dj6bn" (UID: "86af73bb-e6b4-4efd-a66c-8cb32bf3d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:17.686932 master-0 kubenswrapper[27819]: I0319 09:46:17.685518 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"
Mar 19 09:46:17.686932 master-0 kubenswrapper[27819]: I0319 09:46:17.685681 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"
Mar 19 09:46:17.686932 master-0 kubenswrapper[27819]: E0319 09:46:17.685806 27819 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 19 09:46:17.686932 master-0 kubenswrapper[27819]: E0319 09:46:17.685867 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:25.685852011 +0000 UTC m=+770.607429713 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "webhook-server-cert" not found
Mar 19 09:46:17.686932 master-0 kubenswrapper[27819]: E0319 09:46:17.686173 27819 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 19 09:46:17.686932 master-0 kubenswrapper[27819]: E0319 09:46:17.686197 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:25.686189829 +0000 UTC m=+770.607767521 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "metrics-server-cert" not found
Mar 19 09:46:24.224793 master-0 kubenswrapper[27819]: I0319 09:46:24.224715 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"
Mar 19 09:46:24.225436 master-0 kubenswrapper[27819]: E0319 09:46:24.224970 27819 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:24.225436 master-0 kubenswrapper[27819]: E0319 09:46:24.225070 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert podName:185fa92f-4bb6-4254-a759-8162f8fa8a11 nodeName:}" failed. No retries permitted until 2026-03-19 09:46:40.225048173 +0000 UTC m=+785.146625865 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert") pod "infra-operator-controller-manager-7dd6bb94c9-2lnjh" (UID: "185fa92f-4bb6-4254-a759-8162f8fa8a11") : secret "infra-operator-webhook-server-cert" not found
Mar 19 09:46:25.148123 master-0 kubenswrapper[27819]: I0319 09:46:25.148035 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"
Mar 19 09:46:25.148457 master-0 kubenswrapper[27819]: E0319 09:46:25.148364 27819 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:25.148457 master-0 kubenswrapper[27819]: E0319 09:46:25.148417 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert podName:86af73bb-e6b4-4efd-a66c-8cb32bf3d02d nodeName:}" failed. No retries permitted until 2026-03-19 09:46:41.148402761 +0000 UTC m=+786.069980453 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899dj6bn" (UID: "86af73bb-e6b4-4efd-a66c-8cb32bf3d02d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 19 09:46:25.759856 master-0 kubenswrapper[27819]: I0319 09:46:25.759495 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"
Mar 19 09:46:25.759856 master-0 kubenswrapper[27819]: I0319 09:46:25.759680 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"
Mar 19 09:46:25.759856 master-0 kubenswrapper[27819]: E0319 09:46:25.759683 27819 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 19 09:46:25.761276 master-0 kubenswrapper[27819]: E0319 09:46:25.759900 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:41.759876679 +0000 UTC m=+786.681454391 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "webhook-server-cert" not found
Mar 19 09:46:25.761276 master-0 kubenswrapper[27819]: E0319 09:46:25.759753 27819 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 19 09:46:25.761276 master-0 kubenswrapper[27819]: E0319 09:46:25.759990 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs podName:8dca8916-c8e2-4a1c-a348-bf827c760abd nodeName:}" failed. No retries permitted until 2026-03-19 09:46:41.759971011 +0000 UTC m=+786.681548723 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-sn8g2" (UID: "8dca8916-c8e2-4a1c-a348-bf827c760abd") : secret "metrics-server-cert" not found
Mar 19 09:46:33.417452 master-0 kubenswrapper[27819]: I0319 09:46:33.417406 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld" event={"ID":"11871ed4-c83d-4b10-bf10-1507c85a3581","Type":"ContainerStarted","Data":"5332923539a407fa5f597b4e208a7dc1d0afd210d7d840a391e0dfd85662b205"}
Mar 19 09:46:33.419335 master-0 kubenswrapper[27819]: I0319 09:46:33.419311 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" event={"ID":"2815df79-ea7a-4a2a-8701-417f5aa7a8a4","Type":"ContainerStarted","Data":"cede7475493a05022a6d87834e3033c93d62e1858b253c4765ace99805ddac24"}
Mar 19 09:46:33.421121 master-0 kubenswrapper[27819]: I0319 09:46:33.420636 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj"
Mar 19 09:46:33.428334 master-0 kubenswrapper[27819]: I0319 09:46:33.427860 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w" event={"ID":"d0c53c39-36fd-4e38-8031-e22003748723","Type":"ContainerStarted","Data":"44c780794264ca81dc553465babe1fe0d15ef333bb776121b49adae37a8cfafb"}
Mar 19 09:46:33.428477 master-0 kubenswrapper[27819]: I0319 09:46:33.428459 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w"
Mar 19 09:46:33.429839 master-0 kubenswrapper[27819]: I0319 09:46:33.429821 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz" event={"ID":"914356dd-4ccd-4818-aeaf-16f32fc9a249","Type":"ContainerStarted","Data":"17acc35db91a725f1f55b6721abf39e1c06f97a12a5e3a1d2971c94089f3e06d"}
Mar 19 09:46:33.430157 master-0 kubenswrapper[27819]: I0319 09:46:33.430094 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz"
Mar 19 09:46:33.433728 master-0 kubenswrapper[27819]: I0319 09:46:33.433703 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk" event={"ID":"3579c069-58e4-4563-8f6f-67bf9950d04f","Type":"ContainerStarted","Data":"eca8d3dd272fddfcb58edbd23e9513747df7f92e5aa515a8914214139d96e1d8"}
Mar 19 09:46:33.437079 master-0 kubenswrapper[27819]: I0319 09:46:33.436864 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx" event={"ID":"81ebb272-c362-4900-91ef-ccd4e48aaec4","Type":"ContainerStarted","Data":"ba2efc81fa6594e86b603d30001efe8160072827097386c8ed325387f8f450bf"}
Mar 19 09:46:33.437347 master-0 kubenswrapper[27819]: I0319 09:46:33.437330 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx"
Mar 19 09:46:33.438337 master-0 kubenswrapper[27819]: I0319 09:46:33.438305 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" event={"ID":"7aaa7fb9-ba38-49ae-8cca-6b699862813d","Type":"ContainerStarted","Data":"f18e48f648f825a85af94c1402ba0f220c80b59dd9d20f2134a5189cc8173c62"}
Mar 19 09:46:33.441637 master-0 kubenswrapper[27819]: I0319 09:46:33.441595 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7" event={"ID":"8feb876d-d28d-45ba-8bae-02cf56732e7b","Type":"ContainerStarted","Data":"24755583c3f0862ab9192c9a11cbafe85b07684c7a4a07bf974b7acee957c549"}
Mar 19 09:46:33.442929 master-0 kubenswrapper[27819]: I0319 09:46:33.442903 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx" event={"ID":"4a786353-9349-45a7-a7b2-c506296f2525","Type":"ContainerStarted","Data":"31688ff0412ee56c886adb051092587074305d4bbf9fe4cd448361cd59350e80"}
Mar 19 09:46:33.443563 master-0 kubenswrapper[27819]: I0319 09:46:33.443522 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx"
Mar 19 09:46:33.454726 master-0 kubenswrapper[27819]: I0319 09:46:33.454689 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2" event={"ID":"16c86233-8caa-4255-a0f0-f06ceb351389","Type":"ContainerStarted","Data":"6c5885600fadabbb5ac462e9718dca740c334ddccb228e883605938f85913a0f"}
Mar 19 09:46:33.466705 master-0 kubenswrapper[27819]: I0319 09:46:33.466665 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" event={"ID":"1b2e9b70-1291-457f-a686-22ac2ee57053","Type":"ContainerStarted","Data":"179ee962628e857d7c57ec62dfd92c21000f1d123688e3a83d5e4cfe6ef55508"}
Mar 19 09:46:33.474884 master-0 kubenswrapper[27819]: I0319 09:46:33.474840 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj" event={"ID":"2ec041b9-4f3a-472c-9410-5c14c32990fc","Type":"ContainerStarted","Data":"eff73d4b53c80cae453bf1aa542d6295cb127ab92d2957dd7e07a66192a22913"}
Mar 19 09:46:33.475728 master-0 kubenswrapper[27819]: I0319 09:46:33.475454 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj"
Mar 19 09:46:33.480731 master-0 kubenswrapper[27819]: I0319 09:46:33.480701 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj" event={"ID":"17e90888-d0f7-447c-965a-ed826e5668b5","Type":"ContainerStarted","Data":"9f9d7c37d9599b6d7231bd614d8ecf613b04a534c017b4cb74a40c058a6fa1ae"}
Mar 19 09:46:33.487887 master-0 kubenswrapper[27819]: I0319 09:46:33.487850 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r" event={"ID":"8903724a-0684-45e7-918d-3a4a90c33d8f","Type":"ContainerStarted","Data":"f524a5360a6efd015d339710272928836e74c46ac48e44b219fa5cb6880fc4cf"}
Mar 19 09:46:33.492502 master-0 kubenswrapper[27819]: I0319 09:46:33.492476 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg" event={"ID":"42499139-f602-4061-97c5-e20d54d0731b","Type":"ContainerStarted","Data":"0dfae92253e162dcf02b7aaffc4d18536d17d858cb051b59203c836eb523204e"}
Mar 19 09:46:33.503244 master-0 kubenswrapper[27819]: I0319 09:46:33.502135 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" event={"ID":"4283cfcf-3551-4738-8394-45bd27e4e8fa","Type":"ContainerStarted","Data":"b3adf7740697c39bf85d263280d315c066f6a8bd2fda5ec4ac2c22297b418d12"}
Mar 19 09:46:33.504838 master-0 kubenswrapper[27819]: I0319 09:46:33.504795 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc" event={"ID":"7f313579-c939-4189-be09-93430e848d9f","Type":"ContainerStarted","Data":"16d63143cfa58f810f012b1112ae4fbaff4547c7cdbb9f0304093310d7ed0d41"}
Mar 19 09:46:33.505945 master-0 kubenswrapper[27819]: I0319 09:46:33.505922 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" event={"ID":"7d2df03a-c295-40bc-a256-936fc90ffe7f","Type":"ContainerStarted","Data":"809a8465f62bab3495d7808536d8d9105969b81efd9a28dd3616d80eb27bce3f"}
Mar 19 09:46:33.507325 master-0 kubenswrapper[27819]: I0319 09:46:33.507303 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg"
Mar 19 09:46:33.515288 master-0 kubenswrapper[27819]: I0319 09:46:33.515222 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" event={"ID":"c066a62a-37a8-453c-9267-fe8a8b744076","Type":"ContainerStarted","Data":"1028e6fc9334c00f381659cc93fa0b08d62506e0f92f0629812d84ca63ef2af9"}
Mar 19 09:46:33.515754 master-0 kubenswrapper[27819]: I0319 09:46:33.515723 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8"
Mar 19 09:46:33.519606 master-0 kubenswrapper[27819]: I0319 09:46:33.519578 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29" event={"ID":"121a3aeb-64fc-43d2-8ba2-e88fe9349525","Type":"ContainerStarted","Data":"e6298a868fb48ec28d3957367fff7103902a9d9aa2d765b42dc0c5cd23a959aa"}
Mar 19 09:46:34.528411 master-0 kubenswrapper[27819]: I0319 09:46:34.528360 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb"
Mar 19 09:46:34.528951 master-0 kubenswrapper[27819]: I0319 09:46:34.528484 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg"
Mar 19 09:46:34.528951 master-0 kubenswrapper[27819]: I0319 09:46:34.528785 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb"
Mar 19 09:46:34.529150 master-0 kubenswrapper[27819]: I0319 09:46:34.529128 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r"
Mar 19 09:46:34.529798 master-0 kubenswrapper[27819]: I0319 09:46:34.529769 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld"
Mar 19 09:46:34.530952 master-0 kubenswrapper[27819]: I0319 09:46:34.530926 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2"
Mar 19 09:46:34.531007 master-0 kubenswrapper[27819]: I0319 09:46:34.530985 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk"
Mar 19 09:46:34.531837 master-0 kubenswrapper[27819]: I0319 09:46:34.531814 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc"
Mar 19 09:46:34.531997 master-0 kubenswrapper[27819]: I0319 09:46:34.531977 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj"
Mar 19 09:46:34.541752 master-0 kubenswrapper[27819]: I0319 09:46:34.541684 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" podStartSLOduration=11.317013751 podStartE2EDuration="26.541668498s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:09.450305321 +0000 UTC m=+754.371883023" lastFinishedPulling="2026-03-19 09:46:24.674960078 +0000 UTC m=+769.596537770" observedRunningTime="2026-03-19 09:46:34.54089142 +0000 UTC m=+779.462469122" watchObservedRunningTime="2026-03-19 09:46:34.541668498 +0000 UTC m=+779.463246190"
Mar 19 09:46:34.622573 master-0 kubenswrapper[27819]: I0319 09:46:34.621715 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7" podStartSLOduration=5.79064832 podStartE2EDuration="26.621687667s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:10.817698443 +0000 UTC m=+755.739276135" lastFinishedPulling="2026-03-19 09:46:31.64873779 +0000 UTC m=+776.570315482" observedRunningTime="2026-03-19 09:46:34.616139562 +0000 UTC m=+779.537717254" watchObservedRunningTime="2026-03-19 09:46:34.621687667 +0000 UTC m=+779.543265359"
Mar 19 09:46:34.841597 master-0 kubenswrapper[27819]: I0319 09:46:34.841437 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc" podStartSLOduration=6.443020307 podStartE2EDuration="26.841412237s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:09.940221736 +0000 UTC m=+754.861799428" lastFinishedPulling="2026-03-19 09:46:30.338613666 +0000 UTC m=+775.260191358" observedRunningTime="2026-03-19 09:46:34.828516717 +0000 UTC m=+779.750094409" watchObservedRunningTime="2026-03-19 09:46:34.841412237 +0000 UTC m=+779.762989929"
Mar 19 09:46:34.882915 master-0 kubenswrapper[27819]: I0319 09:46:34.882827 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz" podStartSLOduration=6.091484113 podStartE2EDuration="26.882796157s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:10.857481787 +0000 UTC m=+755.779059479" lastFinishedPulling="2026-03-19 09:46:31.648793831 +0000 UTC m=+776.570371523" observedRunningTime="2026-03-19 09:46:34.773919959 +0000 UTC m=+779.695497651" watchObservedRunningTime="2026-03-19 09:46:34.882796157 +0000 UTC m=+779.804373859"
Mar 19 09:46:34.929874 master-0 kubenswrapper[27819]: I0319 09:46:34.929799 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk" podStartSLOduration=5.915694802 podStartE2EDuration="25.929780893s" podCreationTimestamp="2026-03-19 09:46:09 +0000 UTC" firstStartedPulling="2026-03-19 09:46:11.66760588 +0000 UTC m=+756.589183572" lastFinishedPulling="2026-03-19 09:46:31.681691971 +0000 UTC m=+776.603269663" observedRunningTime="2026-03-19 09:46:34.867622966 +0000 UTC m=+779.789200668" watchObservedRunningTime="2026-03-19 09:46:34.929780893 +0000 UTC m=+779.851358585"
Mar 19 09:46:34.938171 master-0 kubenswrapper[27819]: I0319 09:46:34.935721 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w" podStartSLOduration=10.463103114 podStartE2EDuration="26.935705056s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:10.601481162 +0000 UTC m=+755.523058854" lastFinishedPulling="2026-03-19 09:46:27.074083104 +0000 UTC m=+771.995660796" observedRunningTime="2026-03-19 09:46:34.909197271 +0000 UTC m=+779.830774963" watchObservedRunningTime="2026-03-19 09:46:34.935705056 +0000 UTC m=+779.857282748"
Mar 19 09:46:35.023566 master-0 kubenswrapper[27819]: I0319 09:46:35.021099 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld" podStartSLOduration=7.681300487 podStartE2EDuration="26.021054526s" podCreationTimestamp="2026-03-19 09:46:09 +0000 UTC" firstStartedPulling="2026-03-19 09:46:13.342930614 +0000 UTC m=+758.264508306" lastFinishedPulling="2026-03-19 09:46:31.682684653 +0000 UTC m=+776.604262345" observedRunningTime="2026-03-19 09:46:35.006778935 +0000 UTC m=+779.928356627" watchObservedRunningTime="2026-03-19 09:46:35.021054526 +0000 UTC m=+779.942632228"
Mar 19 09:46:35.362250 master-0 kubenswrapper[27819]: I0319 09:46:35.362183 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj" podStartSLOduration=5.616509375 podStartE2EDuration="27.362167234s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:09.895752796 +0000 UTC m=+754.817330488" lastFinishedPulling="2026-03-19 09:46:31.641410655 +0000 UTC m=+776.562988347" observedRunningTime="2026-03-19 09:46:35.357482239 +0000 UTC m=+780.279059931" watchObservedRunningTime="2026-03-19 09:46:35.362167234 +0000 UTC m=+780.283744926"
Mar 19 09:46:35.444225 master-0 kubenswrapper[27819]: I0319 09:46:35.444145 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2" podStartSLOduration=6.404503271 podStartE2EDuration="27.444126737s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:10.602132426 +0000 UTC m=+755.523710118" lastFinishedPulling="2026-03-19 09:46:31.641755892 +0000 UTC m=+776.563333584" observedRunningTime="2026-03-19 09:46:35.440693479 +0000 UTC m=+780.362271171" watchObservedRunningTime="2026-03-19 09:46:35.444126737 +0000 UTC m=+780.365704429"
Mar 19 09:46:35.450997 master-0 kubenswrapper[27819]: I0319 09:46:35.450927 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx" podStartSLOduration=9.129701897 podStartE2EDuration="27.450912489s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:13.341776318 +0000 UTC m=+758.263354010" lastFinishedPulling="2026-03-19 09:46:31.66298691 +0000 UTC m=+776.584564602" observedRunningTime="2026-03-19 09:46:35.399841261 +0000 UTC m=+780.321418953" watchObservedRunningTime="2026-03-19 09:46:35.450912489 +0000 UTC m=+780.372490181"
Mar 19 09:46:35.471198 master-0 kubenswrapper[27819]: I0319 09:46:35.471115 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5nvqc" podStartSLOduration=8.138898244 podStartE2EDuration="26.471098003s" podCreationTimestamp="2026-03-19 09:46:09 +0000 UTC" firstStartedPulling="2026-03-19 09:46:13.336876338 +0000 UTC m=+758.258454030" lastFinishedPulling="2026-03-19 09:46:31.669076097 +0000 UTC m=+776.590653789" observedRunningTime="2026-03-19 09:46:35.466463389 +0000 UTC m=+780.388041091" watchObservedRunningTime="2026-03-19 09:46:35.471098003 +0000 UTC m=+780.392675695"
Mar 19 09:46:35.496671 master-0 kubenswrapper[27819]: I0319 09:46:35.496581 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx" podStartSLOduration=8.20322454 podStartE2EDuration="26.496565885s" podCreationTimestamp="2026-03-19 09:46:09 +0000 UTC" firstStartedPulling="2026-03-19 09:46:13.347532468 +0000 UTC m=+758.269110160" lastFinishedPulling="2026-03-19 09:46:31.640873813 +0000 UTC m=+776.562451505" observedRunningTime="2026-03-19 09:46:35.493698911 +0000 UTC m=+780.415276623" watchObservedRunningTime="2026-03-19 09:46:35.496565885 +0000 UTC m=+780.418143577"
Mar 19 09:46:35.521869 master-0 kubenswrapper[27819]: I0319 09:46:35.521782 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" podStartSLOduration=14.012082498 podStartE2EDuration="28.521758442s" podCreationTimestamp="2026-03-19 09:46:07 +0000 UTC" firstStartedPulling="2026-03-19 09:46:09.09623164 +0000 UTC m=+754.017809332" lastFinishedPulling="2026-03-19 09:46:23.605907584 +0000 UTC m=+768.527485276" observedRunningTime="2026-03-19 09:46:35.51324141 +0000 UTC m=+780.434819112" watchObservedRunningTime="2026-03-19 09:46:35.521758442 +0000 UTC m=+780.443336144"
Mar 19 09:46:35.539826 master-0 kubenswrapper[27819]: I0319 09:46:35.539751 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg" podStartSLOduration=6.412213365 podStartE2EDuration="26.539733236s" podCreationTimestamp="2026-03-19 09:46:09 +0000 UTC" firstStartedPulling="2026-03-19 09:46:11.515842898 +0000 UTC m=+756.437420590" lastFinishedPulling="2026-03-19 09:46:31.643362739 +0000 UTC m=+776.564940461" observedRunningTime="2026-03-19 09:46:35.536855422 +0000 UTC m=+780.458433114" watchObservedRunningTime="2026-03-19 09:46:35.539733236 +0000 UTC m=+780.461310928"
Mar 19 09:46:35.571742 master-0 kubenswrapper[27819]: I0319 09:46:35.571660 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" podStartSLOduration=8.176866172 podStartE2EDuration="28.571636903s" podCreationTimestamp="2026-03-19 09:46:07 +0000 UTC" firstStartedPulling="2026-03-19 09:46:09.600360814 +0000 UTC m=+754.521938506" lastFinishedPulling="2026-03-19 09:46:29.995131555 +0000 UTC m=+774.916709237" observedRunningTime="2026-03-19 09:46:35.569807703 +0000 UTC m=+780.491385395" watchObservedRunningTime="2026-03-19 09:46:35.571636903 +0000 UTC m=+780.493214595"
Mar 19 09:46:35.622309 master-0 kubenswrapper[27819]: I0319 09:46:35.621688 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj" podStartSLOduration=7.737305424 podStartE2EDuration="27.621668048s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:10.817666372 +0000 UTC m=+755.739244064" lastFinishedPulling="2026-03-19 09:46:30.702028996 +0000 UTC m=+775.623606688" observedRunningTime="2026-03-19 09:46:35.604117694 +0000 UTC m=+780.525695386" watchObservedRunningTime="2026-03-19 09:46:35.621668048 +0000 UTC m=+780.543245740"
Mar 19 09:46:35.626392 master-0 kubenswrapper[27819]: I0319 09:46:35.626339 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" podStartSLOduration=7.037576282 podStartE2EDuration="28.626329853s" podCreationTimestamp="2026-03-19 09:46:07 +0000 UTC" firstStartedPulling="2026-03-19 09:46:10.053097523 +0000 UTC m=+754.974675215" lastFinishedPulling="2026-03-19 09:46:31.641851094 +0000 UTC m=+776.563428786" observedRunningTime="2026-03-19 09:46:35.625019833 +0000 UTC m=+780.546597535" watchObservedRunningTime="2026-03-19 09:46:35.626329853 +0000 UTC m=+780.547907545"
Mar 19 09:46:35.646699 master-0
kubenswrapper[27819]: I0319 09:46:35.646625 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r" podStartSLOduration=6.606220766 podStartE2EDuration="27.646610289s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:10.601269987 +0000 UTC m=+755.522847679" lastFinishedPulling="2026-03-19 09:46:31.64165948 +0000 UTC m=+776.563237202" observedRunningTime="2026-03-19 09:46:35.645016383 +0000 UTC m=+780.566594075" watchObservedRunningTime="2026-03-19 09:46:35.646610289 +0000 UTC m=+780.568187981" Mar 19 09:46:35.678834 master-0 kubenswrapper[27819]: I0319 09:46:35.678618 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29" podStartSLOduration=8.400499365 podStartE2EDuration="26.678593778s" podCreationTimestamp="2026-03-19 09:46:09 +0000 UTC" firstStartedPulling="2026-03-19 09:46:13.370703118 +0000 UTC m=+758.292280810" lastFinishedPulling="2026-03-19 09:46:31.648797491 +0000 UTC m=+776.570375223" observedRunningTime="2026-03-19 09:46:35.677380281 +0000 UTC m=+780.598957973" watchObservedRunningTime="2026-03-19 09:46:35.678593778 +0000 UTC m=+780.600171470" Mar 19 09:46:35.703485 master-0 kubenswrapper[27819]: I0319 09:46:35.702923 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" podStartSLOduration=12.978214265 podStartE2EDuration="28.702903574s" podCreationTimestamp="2026-03-19 09:46:07 +0000 UTC" firstStartedPulling="2026-03-19 09:46:08.950271789 +0000 UTC m=+753.871849491" lastFinishedPulling="2026-03-19 09:46:24.674961108 +0000 UTC m=+769.596538800" observedRunningTime="2026-03-19 09:46:35.69690864 +0000 UTC m=+780.618486332" watchObservedRunningTime="2026-03-19 09:46:35.702903574 +0000 UTC m=+780.624481266" Mar 19 
09:46:38.257067 master-0 kubenswrapper[27819]: I0319 09:46:38.256980 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-j9jj8" Mar 19 09:46:38.307641 master-0 kubenswrapper[27819]: I0319 09:46:38.307575 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-h4zlg" Mar 19 09:46:38.358202 master-0 kubenswrapper[27819]: I0319 09:46:38.357358 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-w2bpb" Mar 19 09:46:38.437893 master-0 kubenswrapper[27819]: I0319 09:46:38.437842 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-zl6sj" Mar 19 09:46:38.464508 master-0 kubenswrapper[27819]: I0319 09:46:38.464149 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-4pvtb" Mar 19 09:46:38.813437 master-0 kubenswrapper[27819]: I0319 09:46:38.813374 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-wljbj" Mar 19 09:46:38.884361 master-0 kubenswrapper[27819]: I0319 09:46:38.884319 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-knjcc" Mar 19 09:46:39.017679 master-0 kubenswrapper[27819]: I0319 09:46:39.017005 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-bzns2" Mar 19 09:46:39.064737 master-0 kubenswrapper[27819]: I0319 09:46:39.062451 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-g4k7r" Mar 19 09:46:39.178728 master-0 kubenswrapper[27819]: I0319 09:46:39.178681 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-zpr6w" Mar 19 09:46:39.306042 master-0 kubenswrapper[27819]: I0319 09:46:39.305980 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7" Mar 19 09:46:39.308504 master-0 kubenswrapper[27819]: I0319 09:46:39.308465 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rcdb7" Mar 19 09:46:39.359572 master-0 kubenswrapper[27819]: I0319 09:46:39.359502 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-hm9pj" Mar 19 09:46:39.399510 master-0 kubenswrapper[27819]: I0319 09:46:39.399468 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-bmnbx" Mar 19 09:46:39.513584 master-0 kubenswrapper[27819]: I0319 09:46:39.513486 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-czvjz" Mar 19 09:46:39.587574 master-0 kubenswrapper[27819]: I0319 09:46:39.586404 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-8j8rx" Mar 19 09:46:39.875570 master-0 kubenswrapper[27819]: I0319 09:46:39.875453 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-j87zk" Mar 19 09:46:39.962709 master-0 kubenswrapper[27819]: I0319 09:46:39.962519 27819 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-jmprg" Mar 19 09:46:40.001568 master-0 kubenswrapper[27819]: I0319 09:46:39.999996 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-d7wld" Mar 19 09:46:40.009578 master-0 kubenswrapper[27819]: I0319 09:46:40.009246 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29" Mar 19 09:46:40.026574 master-0 kubenswrapper[27819]: I0319 09:46:40.023053 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-8pg29" Mar 19 09:46:40.257288 master-0 kubenswrapper[27819]: I0319 09:46:40.257099 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:40.260459 master-0 kubenswrapper[27819]: I0319 09:46:40.260414 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/185fa92f-4bb6-4254-a759-8162f8fa8a11-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-2lnjh\" (UID: \"185fa92f-4bb6-4254-a759-8162f8fa8a11\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:40.522995 master-0 kubenswrapper[27819]: I0319 09:46:40.522867 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:41.173062 master-0 kubenswrapper[27819]: I0319 09:46:41.173003 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" Mar 19 09:46:41.176997 master-0 kubenswrapper[27819]: I0319 09:46:41.176955 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86af73bb-e6b4-4efd-a66c-8cb32bf3d02d-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899dj6bn\" (UID: \"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" Mar 19 09:46:41.259717 master-0 kubenswrapper[27819]: I0319 09:46:41.259634 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" Mar 19 09:46:41.783929 master-0 kubenswrapper[27819]: I0319 09:46:41.783847 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:41.784457 master-0 kubenswrapper[27819]: I0319 09:46:41.783946 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:41.787726 master-0 kubenswrapper[27819]: I0319 09:46:41.787679 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:41.787896 master-0 kubenswrapper[27819]: I0319 09:46:41.787841 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8dca8916-c8e2-4a1c-a348-bf827c760abd-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-sn8g2\" (UID: \"8dca8916-c8e2-4a1c-a348-bf827c760abd\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:42.017131 master-0 kubenswrapper[27819]: I0319 
09:46:42.017072 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:42.575177 master-0 kubenswrapper[27819]: I0319 09:46:42.575039 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh"] Mar 19 09:46:42.580762 master-0 kubenswrapper[27819]: W0319 09:46:42.580700 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod185fa92f_4bb6_4254_a759_8162f8fa8a11.slice/crio-7024eda6d35737495bf61b31e83875d7387fcd6761b3a8e02551cf18896327d5 WatchSource:0}: Error finding container 7024eda6d35737495bf61b31e83875d7387fcd6761b3a8e02551cf18896327d5: Status 404 returned error can't find the container with id 7024eda6d35737495bf61b31e83875d7387fcd6761b3a8e02551cf18896327d5 Mar 19 09:46:42.640975 master-0 kubenswrapper[27819]: I0319 09:46:42.637637 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" event={"ID":"185fa92f-4bb6-4254-a759-8162f8fa8a11","Type":"ContainerStarted","Data":"7024eda6d35737495bf61b31e83875d7387fcd6761b3a8e02551cf18896327d5"} Mar 19 09:46:42.648786 master-0 kubenswrapper[27819]: I0319 09:46:42.648745 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn"] Mar 19 09:46:43.648622 master-0 kubenswrapper[27819]: I0319 09:46:43.648245 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" event={"ID":"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d","Type":"ContainerStarted","Data":"b28454dc6c5bd1e956e40396137e1d8685b76f2bcc6cd45c4d38cd79c746c5cb"} Mar 19 09:46:43.761100 master-0 kubenswrapper[27819]: I0319 09:46:43.761057 27819 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2"] Mar 19 09:46:44.659392 master-0 kubenswrapper[27819]: I0319 09:46:44.659347 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" event={"ID":"8dca8916-c8e2-4a1c-a348-bf827c760abd","Type":"ContainerStarted","Data":"3ca94b32d2d40aa2210c910632fbd8d84e790936d3d9e3342ccead6ac8f9385a"} Mar 19 09:46:44.660019 master-0 kubenswrapper[27819]: I0319 09:46:44.659995 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" event={"ID":"8dca8916-c8e2-4a1c-a348-bf827c760abd","Type":"ContainerStarted","Data":"12eb877a41c58c02fb0ed32dd3427189d62c57edf942302fbc8b087c0493512d"} Mar 19 09:46:44.661171 master-0 kubenswrapper[27819]: I0319 09:46:44.661155 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:46:44.747630 master-0 kubenswrapper[27819]: I0319 09:46:44.747382 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" podStartSLOduration=35.74735408 podStartE2EDuration="35.74735408s" podCreationTimestamp="2026-03-19 09:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:46:44.736950526 +0000 UTC m=+789.658528218" watchObservedRunningTime="2026-03-19 09:46:44.74735408 +0000 UTC m=+789.668931772" Mar 19 09:46:46.681465 master-0 kubenswrapper[27819]: I0319 09:46:46.681390 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" 
event={"ID":"86af73bb-e6b4-4efd-a66c-8cb32bf3d02d","Type":"ContainerStarted","Data":"99a44ea030dabe9620a7f929b79c3d7e154d7a792e796b7a0b1c1d327cf61002"} Mar 19 09:46:46.682203 master-0 kubenswrapper[27819]: I0319 09:46:46.681978 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" Mar 19 09:46:46.683933 master-0 kubenswrapper[27819]: I0319 09:46:46.683884 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" event={"ID":"185fa92f-4bb6-4254-a759-8162f8fa8a11","Type":"ContainerStarted","Data":"7744f47af661987494817d5a26ebc43ddd4efbbc9ede215a39002feeb312e7cf"} Mar 19 09:46:46.684034 master-0 kubenswrapper[27819]: I0319 09:46:46.683944 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:46:46.717236 master-0 kubenswrapper[27819]: I0319 09:46:46.717052 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" podStartSLOduration=34.912444307 podStartE2EDuration="38.717027991s" podCreationTimestamp="2026-03-19 09:46:08 +0000 UTC" firstStartedPulling="2026-03-19 09:46:42.649505126 +0000 UTC m=+787.571082818" lastFinishedPulling="2026-03-19 09:46:46.45408881 +0000 UTC m=+791.375666502" observedRunningTime="2026-03-19 09:46:46.716891448 +0000 UTC m=+791.638469150" watchObservedRunningTime="2026-03-19 09:46:46.717027991 +0000 UTC m=+791.638605683" Mar 19 09:46:46.743898 master-0 kubenswrapper[27819]: I0319 09:46:46.743825 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" podStartSLOduration=34.878606536 podStartE2EDuration="38.743804943s" podCreationTimestamp="2026-03-19 09:46:08 +0000 
UTC" firstStartedPulling="2026-03-19 09:46:42.5834165 +0000 UTC m=+787.504994192" lastFinishedPulling="2026-03-19 09:46:46.448614907 +0000 UTC m=+791.370192599" observedRunningTime="2026-03-19 09:46:46.737028621 +0000 UTC m=+791.658606313" watchObservedRunningTime="2026-03-19 09:46:46.743804943 +0000 UTC m=+791.665382635" Mar 19 09:46:51.265891 master-0 kubenswrapper[27819]: I0319 09:46:51.265834 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899dj6bn" Mar 19 09:46:52.024802 master-0 kubenswrapper[27819]: I0319 09:46:52.024733 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-sn8g2" Mar 19 09:47:00.531227 master-0 kubenswrapper[27819]: I0319 09:47:00.531161 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-2lnjh" Mar 19 09:47:37.784139 master-0 kubenswrapper[27819]: I0319 09:47:37.784076 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-ssxws"] Mar 19 09:47:37.787184 master-0 kubenswrapper[27819]: I0319 09:47:37.787153 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:37.800649 master-0 kubenswrapper[27819]: I0319 09:47:37.792304 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 19 09:47:37.800649 master-0 kubenswrapper[27819]: I0319 09:47:37.793092 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 19 09:47:37.800649 master-0 kubenswrapper[27819]: I0319 09:47:37.793215 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 19 09:47:37.827129 master-0 kubenswrapper[27819]: I0319 09:47:37.825911 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc41f78-40cd-4fce-8749-2b3d239d18cb-config\") pod \"dnsmasq-dns-685c76cf85-ssxws\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:37.827129 master-0 kubenswrapper[27819]: I0319 09:47:37.825970 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxfd\" (UniqueName: \"kubernetes.io/projected/ecc41f78-40cd-4fce-8749-2b3d239d18cb-kube-api-access-nsxfd\") pod \"dnsmasq-dns-685c76cf85-ssxws\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:37.827129 master-0 kubenswrapper[27819]: I0319 09:47:37.826100 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-ssxws"] Mar 19 09:47:37.878169 master-0 kubenswrapper[27819]: I0319 09:47:37.878102 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-vrtcp"] Mar 19 09:47:37.882531 master-0 kubenswrapper[27819]: I0319 09:47:37.882021 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:37.886406 master-0 kubenswrapper[27819]: I0319 09:47:37.886367 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 19 09:47:37.895051 master-0 kubenswrapper[27819]: I0319 09:47:37.894978 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-vrtcp"] Mar 19 09:47:37.939162 master-0 kubenswrapper[27819]: I0319 09:47:37.939105 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:37.939381 master-0 kubenswrapper[27819]: I0319 09:47:37.939186 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dftqw\" (UniqueName: \"kubernetes.io/projected/5279e013-1d5b-4c59-8deb-9fb1cc70212c-kube-api-access-dftqw\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:37.939381 master-0 kubenswrapper[27819]: I0319 09:47:37.939299 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc41f78-40cd-4fce-8749-2b3d239d18cb-config\") pod \"dnsmasq-dns-685c76cf85-ssxws\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:37.939381 master-0 kubenswrapper[27819]: I0319 09:47:37.939363 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-config\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: 
\"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:37.939496 master-0 kubenswrapper[27819]: I0319 09:47:37.939392 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxfd\" (UniqueName: \"kubernetes.io/projected/ecc41f78-40cd-4fce-8749-2b3d239d18cb-kube-api-access-nsxfd\") pod \"dnsmasq-dns-685c76cf85-ssxws\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:37.940727 master-0 kubenswrapper[27819]: I0319 09:47:37.940701 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc41f78-40cd-4fce-8749-2b3d239d18cb-config\") pod \"dnsmasq-dns-685c76cf85-ssxws\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:37.968404 master-0 kubenswrapper[27819]: I0319 09:47:37.968362 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxfd\" (UniqueName: \"kubernetes.io/projected/ecc41f78-40cd-4fce-8749-2b3d239d18cb-kube-api-access-nsxfd\") pod \"dnsmasq-dns-685c76cf85-ssxws\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:38.042419 master-0 kubenswrapper[27819]: I0319 09:47:38.040825 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-config\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:38.042419 master-0 kubenswrapper[27819]: I0319 09:47:38.040917 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: 
\"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:38.042419 master-0 kubenswrapper[27819]: I0319 09:47:38.040948 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dftqw\" (UniqueName: \"kubernetes.io/projected/5279e013-1d5b-4c59-8deb-9fb1cc70212c-kube-api-access-dftqw\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:38.042419 master-0 kubenswrapper[27819]: I0319 09:47:38.041921 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-dns-svc\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:38.043095 master-0 kubenswrapper[27819]: I0319 09:47:38.043044 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-config\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:38.070397 master-0 kubenswrapper[27819]: I0319 09:47:38.070340 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dftqw\" (UniqueName: \"kubernetes.io/projected/5279e013-1d5b-4c59-8deb-9fb1cc70212c-kube-api-access-dftqw\") pod \"dnsmasq-dns-8476fd89bc-vrtcp\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:38.125490 master-0 kubenswrapper[27819]: I0319 09:47:38.125427 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:47:38.247942 master-0 kubenswrapper[27819]: I0319 09:47:38.247859 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:47:38.575039 master-0 kubenswrapper[27819]: I0319 09:47:38.574022 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-ssxws"] Mar 19 09:47:38.589569 master-0 kubenswrapper[27819]: I0319 09:47:38.589517 27819 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 19 09:47:38.753108 master-0 kubenswrapper[27819]: I0319 09:47:38.751187 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-vrtcp"] Mar 19 09:47:38.753108 master-0 kubenswrapper[27819]: W0319 09:47:38.751884 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5279e013_1d5b_4c59_8deb_9fb1cc70212c.slice/crio-418668ec1e2e104968acdb5be2c8cde4fbb6e6276cf552c6153572830b46b2d5 WatchSource:0}: Error finding container 418668ec1e2e104968acdb5be2c8cde4fbb6e6276cf552c6153572830b46b2d5: Status 404 returned error can't find the container with id 418668ec1e2e104968acdb5be2c8cde4fbb6e6276cf552c6153572830b46b2d5 Mar 19 09:47:39.217193 master-0 kubenswrapper[27819]: I0319 09:47:39.217120 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" event={"ID":"ecc41f78-40cd-4fce-8749-2b3d239d18cb","Type":"ContainerStarted","Data":"6e6860c4e6e3052b6b34f117b20ae6ba92e7ea26eeb4151ebc2cd03c74996199"} Mar 19 09:47:39.218381 master-0 kubenswrapper[27819]: I0319 09:47:39.218313 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" 
event={"ID":"5279e013-1d5b-4c59-8deb-9fb1cc70212c","Type":"ContainerStarted","Data":"418668ec1e2e104968acdb5be2c8cde4fbb6e6276cf552c6153572830b46b2d5"} Mar 19 09:47:40.708625 master-0 kubenswrapper[27819]: I0319 09:47:40.708579 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-ssxws"] Mar 19 09:47:40.721700 master-0 kubenswrapper[27819]: I0319 09:47:40.721444 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hgs6b"] Mar 19 09:47:40.723036 master-0 kubenswrapper[27819]: I0319 09:47:40.723013 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.753223 master-0 kubenswrapper[27819]: I0319 09:47:40.738195 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hgs6b"] Mar 19 09:47:40.832489 master-0 kubenswrapper[27819]: I0319 09:47:40.832331 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-config\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.832489 master-0 kubenswrapper[27819]: I0319 09:47:40.832413 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-dns-svc\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.832489 master-0 kubenswrapper[27819]: I0319 09:47:40.832476 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k8pn\" (UniqueName: \"kubernetes.io/projected/9759090c-c277-4412-93dc-6bc7da2985c0-kube-api-access-5k8pn\") 
pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.938226 master-0 kubenswrapper[27819]: I0319 09:47:40.937848 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-dns-svc\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.938943 master-0 kubenswrapper[27819]: I0319 09:47:40.938908 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k8pn\" (UniqueName: \"kubernetes.io/projected/9759090c-c277-4412-93dc-6bc7da2985c0-kube-api-access-5k8pn\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.944817 master-0 kubenswrapper[27819]: I0319 09:47:40.938941 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-dns-svc\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.944817 master-0 kubenswrapper[27819]: I0319 09:47:40.941096 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-config\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.944817 master-0 kubenswrapper[27819]: I0319 09:47:40.939532 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-config\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: 
\"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:40.977363 master-0 kubenswrapper[27819]: I0319 09:47:40.977283 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k8pn\" (UniqueName: \"kubernetes.io/projected/9759090c-c277-4412-93dc-6bc7da2985c0-kube-api-access-5k8pn\") pod \"dnsmasq-dns-76849d6659-hgs6b\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:41.076234 master-0 kubenswrapper[27819]: I0319 09:47:41.067497 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:47:41.085059 master-0 kubenswrapper[27819]: I0319 09:47:41.080152 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-vrtcp"] Mar 19 09:47:41.152628 master-0 kubenswrapper[27819]: I0319 09:47:41.151625 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv"] Mar 19 09:47:41.156622 master-0 kubenswrapper[27819]: I0319 09:47:41.153355 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.156622 master-0 kubenswrapper[27819]: I0319 09:47:41.153530 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv"] Mar 19 09:47:41.257469 master-0 kubenswrapper[27819]: I0319 09:47:41.257338 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.261293 master-0 kubenswrapper[27819]: I0319 09:47:41.261163 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-config\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.262008 master-0 kubenswrapper[27819]: I0319 09:47:41.261596 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdrk\" (UniqueName: \"kubernetes.io/projected/073d89ae-4524-4fda-87d1-bdd81ef69236-kube-api-access-jvdrk\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.365169 master-0 kubenswrapper[27819]: I0319 09:47:41.364950 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-config\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.365169 master-0 kubenswrapper[27819]: I0319 09:47:41.365078 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jvdrk\" (UniqueName: \"kubernetes.io/projected/073d89ae-4524-4fda-87d1-bdd81ef69236-kube-api-access-jvdrk\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.368780 master-0 kubenswrapper[27819]: I0319 09:47:41.367627 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.368780 master-0 kubenswrapper[27819]: I0319 09:47:41.368396 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-dns-svc\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.371039 master-0 kubenswrapper[27819]: I0319 09:47:41.370994 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-config\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.403697 master-0 kubenswrapper[27819]: I0319 09:47:41.403537 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdrk\" (UniqueName: \"kubernetes.io/projected/073d89ae-4524-4fda-87d1-bdd81ef69236-kube-api-access-jvdrk\") pod \"dnsmasq-dns-6ff8fd9d5c-bsrqv\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:41.533979 master-0 kubenswrapper[27819]: I0319 09:47:41.533391 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:47:43.238768 master-0 kubenswrapper[27819]: W0319 09:47:43.234057 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod073d89ae_4524_4fda_87d1_bdd81ef69236.slice/crio-a929db7f5a708c25b8dbee87c6dc53bf4684978b1b167295cc0dce049d47140d WatchSource:0}: Error finding container a929db7f5a708c25b8dbee87c6dc53bf4684978b1b167295cc0dce049d47140d: Status 404 returned error can't find the container with id a929db7f5a708c25b8dbee87c6dc53bf4684978b1b167295cc0dce049d47140d Mar 19 09:47:43.242712 master-0 kubenswrapper[27819]: I0319 09:47:43.241786 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hgs6b"] Mar 19 09:47:43.243687 master-0 kubenswrapper[27819]: W0319 09:47:43.243654 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9759090c_c277_4412_93dc_6bc7da2985c0.slice/crio-0265aa058aa7ba08cf003a2726f73baa81afb0acbd53faba51bd828c6af51ca0 WatchSource:0}: Error finding container 0265aa058aa7ba08cf003a2726f73baa81afb0acbd53faba51bd828c6af51ca0: Status 404 returned error can't find the container with id 0265aa058aa7ba08cf003a2726f73baa81afb0acbd53faba51bd828c6af51ca0 Mar 19 09:47:43.250513 master-0 kubenswrapper[27819]: I0319 09:47:43.250470 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv"] Mar 19 09:47:43.356622 master-0 kubenswrapper[27819]: I0319 09:47:43.356569 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" event={"ID":"9759090c-c277-4412-93dc-6bc7da2985c0","Type":"ContainerStarted","Data":"0265aa058aa7ba08cf003a2726f73baa81afb0acbd53faba51bd828c6af51ca0"} Mar 19 09:47:43.357624 master-0 kubenswrapper[27819]: I0319 09:47:43.357599 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" event={"ID":"073d89ae-4524-4fda-87d1-bdd81ef69236","Type":"ContainerStarted","Data":"a929db7f5a708c25b8dbee87c6dc53bf4684978b1b167295cc0dce049d47140d"} Mar 19 09:47:44.892098 master-0 kubenswrapper[27819]: I0319 09:47:44.882646 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 09:47:44.892098 master-0 kubenswrapper[27819]: I0319 09:47:44.884368 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:44.892098 master-0 kubenswrapper[27819]: I0319 09:47:44.886758 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 19 09:47:44.899482 master-0 kubenswrapper[27819]: I0319 09:47:44.898329 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 19 09:47:44.899482 master-0 kubenswrapper[27819]: I0319 09:47:44.898836 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 19 09:47:44.899482 master-0 kubenswrapper[27819]: I0319 09:47:44.899006 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 19 09:47:44.899482 master-0 kubenswrapper[27819]: I0319 09:47:44.899153 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 19 09:47:44.899482 master-0 kubenswrapper[27819]: I0319 09:47:44.899338 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 19 09:47:44.950044 master-0 kubenswrapper[27819]: I0319 09:47:44.949943 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.005359 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.005433 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.005483 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-78b8f9cf-6b4e-4f81-8d12-186ff31521e4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9c6db859-108c-47ff-ac07-50986b70208d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.005513 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.005769 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 
master-0 kubenswrapper[27819]: I0319 09:47:45.005817 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.006028 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.006051 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.006087 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqg6t\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-kube-api-access-gqg6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.006166 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.008632 master-0 kubenswrapper[27819]: I0319 09:47:45.006231 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108358 master-0 kubenswrapper[27819]: I0319 09:47:45.108303 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108568 master-0 kubenswrapper[27819]: I0319 09:47:45.108417 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108568 master-0 kubenswrapper[27819]: I0319 09:47:45.108437 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108568 master-0 kubenswrapper[27819]: I0319 09:47:45.108457 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqg6t\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-kube-api-access-gqg6t\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108568 master-0 kubenswrapper[27819]: I0319 09:47:45.108487 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108568 master-0 kubenswrapper[27819]: I0319 09:47:45.108518 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108568 master-0 kubenswrapper[27819]: I0319 09:47:45.108548 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108753 master-0 kubenswrapper[27819]: I0319 09:47:45.108581 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108753 master-0 kubenswrapper[27819]: I0319 09:47:45.108603 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-78b8f9cf-6b4e-4f81-8d12-186ff31521e4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9c6db859-108c-47ff-ac07-50986b70208d\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108753 master-0 kubenswrapper[27819]: I0319 09:47:45.108620 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.108753 master-0 kubenswrapper[27819]: I0319 09:47:45.108647 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.111543 master-0 kubenswrapper[27819]: I0319 09:47:45.109895 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.111543 master-0 kubenswrapper[27819]: I0319 09:47:45.111085 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.112611 master-0 kubenswrapper[27819]: I0319 09:47:45.112518 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.113054 master-0 kubenswrapper[27819]: I0319 09:47:45.113010 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.115266 master-0 kubenswrapper[27819]: I0319 09:47:45.114320 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.115266 master-0 kubenswrapper[27819]: I0319 09:47:45.114682 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 09:47:45.115266 master-0 kubenswrapper[27819]: I0319 09:47:45.114730 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-78b8f9cf-6b4e-4f81-8d12-186ff31521e4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9c6db859-108c-47ff-ac07-50986b70208d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/580b468450bce0bb09b6e46edeb79c1e9993b53d70deaf16f6340fa161a0b2e9/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.125276 master-0 kubenswrapper[27819]: I0319 09:47:45.124259 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.125276 master-0 kubenswrapper[27819]: I0319 09:47:45.124985 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.129257 master-0 kubenswrapper[27819]: I0319 09:47:45.129075 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.129257 master-0 kubenswrapper[27819]: I0319 09:47:45.129220 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.208186 master-0 kubenswrapper[27819]: I0319 09:47:45.207531 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqg6t\" (UniqueName: \"kubernetes.io/projected/ef67f907-aead-43e3-aa5f-3a4f7887cf9c-kube-api-access-gqg6t\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:45.386435 master-0 kubenswrapper[27819]: I0319 09:47:45.385801 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 19 09:47:45.391577 master-0 kubenswrapper[27819]: I0319 09:47:45.387702 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 19 09:47:45.395575 master-0 kubenswrapper[27819]: I0319 09:47:45.393527 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 19 09:47:45.407578 master-0 kubenswrapper[27819]: I0319 09:47:45.398879 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 19 09:47:45.407578 master-0 kubenswrapper[27819]: I0319 09:47:45.404693 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 19 09:47:45.458425 master-0 kubenswrapper[27819]: I0319 09:47:45.458169 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 19 09:47:45.553498 master-0 kubenswrapper[27819]: I0319 09:47:45.553425 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b77f57a3-97e7-49e5-b10d-e352b75a0655-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.553713 master-0 
kubenswrapper[27819]: I0319 09:47:45.553624 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh2cs\" (UniqueName: \"kubernetes.io/projected/b77f57a3-97e7-49e5-b10d-e352b75a0655-kube-api-access-rh2cs\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.558737 master-0 kubenswrapper[27819]: I0319 09:47:45.558693 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b77f57a3-97e7-49e5-b10d-e352b75a0655-config-data\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.561938 master-0 kubenswrapper[27819]: I0319 09:47:45.561887 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b77f57a3-97e7-49e5-b10d-e352b75a0655-kolla-config\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.562472 master-0 kubenswrapper[27819]: I0319 09:47:45.562422 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77f57a3-97e7-49e5-b10d-e352b75a0655-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.665003 master-0 kubenswrapper[27819]: I0319 09:47:45.664952 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b77f57a3-97e7-49e5-b10d-e352b75a0655-kolla-config\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.665770 master-0 kubenswrapper[27819]: I0319 09:47:45.665719 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b77f57a3-97e7-49e5-b10d-e352b75a0655-kolla-config\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.666353 master-0 kubenswrapper[27819]: I0319 09:47:45.666311 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77f57a3-97e7-49e5-b10d-e352b75a0655-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.666422 master-0 kubenswrapper[27819]: I0319 09:47:45.666396 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b77f57a3-97e7-49e5-b10d-e352b75a0655-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.666460 master-0 kubenswrapper[27819]: I0319 09:47:45.666426 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh2cs\" (UniqueName: \"kubernetes.io/projected/b77f57a3-97e7-49e5-b10d-e352b75a0655-kube-api-access-rh2cs\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.666679 master-0 kubenswrapper[27819]: I0319 09:47:45.666513 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b77f57a3-97e7-49e5-b10d-e352b75a0655-config-data\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.667631 master-0 kubenswrapper[27819]: I0319 09:47:45.667563 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/b77f57a3-97e7-49e5-b10d-e352b75a0655-config-data\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.677337 master-0 kubenswrapper[27819]: I0319 09:47:45.677280 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b77f57a3-97e7-49e5-b10d-e352b75a0655-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.678866 master-0 kubenswrapper[27819]: I0319 09:47:45.678836 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b77f57a3-97e7-49e5-b10d-e352b75a0655-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.693801 master-0 kubenswrapper[27819]: I0319 09:47:45.693739 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh2cs\" (UniqueName: \"kubernetes.io/projected/b77f57a3-97e7-49e5-b10d-e352b75a0655-kube-api-access-rh2cs\") pod \"memcached-0\" (UID: \"b77f57a3-97e7-49e5-b10d-e352b75a0655\") " pod="openstack/memcached-0" Mar 19 09:47:45.751670 master-0 kubenswrapper[27819]: I0319 09:47:45.751525 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 19 09:47:46.380663 master-0 kubenswrapper[27819]: I0319 09:47:46.374166 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 09:47:46.380663 master-0 kubenswrapper[27819]: I0319 09:47:46.375924 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.380663 master-0 kubenswrapper[27819]: I0319 09:47:46.378811 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 19 09:47:46.380663 master-0 kubenswrapper[27819]: I0319 09:47:46.378987 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 19 09:47:46.380663 master-0 kubenswrapper[27819]: I0319 09:47:46.379090 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 19 09:47:46.380663 master-0 kubenswrapper[27819]: I0319 09:47:46.379185 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 19 09:47:46.383343 master-0 kubenswrapper[27819]: I0319 09:47:46.382325 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 19 09:47:46.383343 master-0 kubenswrapper[27819]: I0319 09:47:46.383069 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 19 09:47:46.399678 master-0 kubenswrapper[27819]: I0319 09:47:46.398391 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 09:47:46.482078 master-0 kubenswrapper[27819]: I0319 09:47:46.481954 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482078 master-0 kubenswrapper[27819]: I0319 09:47:46.482031 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482388 master-0 kubenswrapper[27819]: I0319 09:47:46.482118 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482388 master-0 kubenswrapper[27819]: I0319 09:47:46.482188 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482388 master-0 kubenswrapper[27819]: I0319 09:47:46.482234 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc8qc\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-kube-api-access-kc8qc\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482388 master-0 kubenswrapper[27819]: I0319 09:47:46.482282 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482388 master-0 kubenswrapper[27819]: I0319 09:47:46.482304 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/3963c46c-0e6f-4a21-9719-469a187d3100-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482388 master-0 kubenswrapper[27819]: I0319 09:47:46.482344 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3963c46c-0e6f-4a21-9719-469a187d3100-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482688 master-0 kubenswrapper[27819]: I0319 09:47:46.482433 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482688 master-0 kubenswrapper[27819]: I0319 09:47:46.482470 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9eedb610-2ca4-4b57-8105-c67ff32cb3fb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ac11290f-f1e9-4f54-967e-51d38e6e21fd\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.482688 master-0 kubenswrapper[27819]: I0319 09:47:46.482536 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-config-data\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.584608 master-0 kubenswrapper[27819]: I0319 09:47:46.584541 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.584608 master-0 kubenswrapper[27819]: I0319 09:47:46.584612 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-9eedb610-2ca4-4b57-8105-c67ff32cb3fb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ac11290f-f1e9-4f54-967e-51d38e6e21fd\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584655 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-config-data\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584694 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584725 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584748 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584765 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584784 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc8qc\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-kube-api-access-kc8qc\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584805 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584821 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3963c46c-0e6f-4a21-9719-469a187d3100-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.584848 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3963c46c-0e6f-4a21-9719-469a187d3100-erlang-cookie-secret\") 
pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.586349 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.587117 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.588446 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.588689 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-config-data\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.591425 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.591463 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-9eedb610-2ca4-4b57-8105-c67ff32cb3fb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ac11290f-f1e9-4f54-967e-51d38e6e21fd\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f555b58f7d2cef9c9a013f7f6ac68d4257e7273301e79cf3b354da481d4207b1/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.595755 master-0 kubenswrapper[27819]: I0319 09:47:46.591720 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3963c46c-0e6f-4a21-9719-469a187d3100-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.603162 master-0 kubenswrapper[27819]: I0319 09:47:46.602453 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.611904 master-0 kubenswrapper[27819]: I0319 09:47:46.611745 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3963c46c-0e6f-4a21-9719-469a187d3100-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.612096 master-0 kubenswrapper[27819]: I0319 09:47:46.611904 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.630764 master-0 kubenswrapper[27819]: I0319 09:47:46.630707 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc8qc\" (UniqueName: \"kubernetes.io/projected/3963c46c-0e6f-4a21-9719-469a187d3100-kube-api-access-kc8qc\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.630979 master-0 kubenswrapper[27819]: I0319 09:47:46.630885 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3963c46c-0e6f-4a21-9719-469a187d3100-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:46.824619 master-0 kubenswrapper[27819]: I0319 09:47:46.824460 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-78b8f9cf-6b4e-4f81-8d12-186ff31521e4\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9c6db859-108c-47ff-ac07-50986b70208d\") pod \"rabbitmq-cell1-server-0\" (UID: \"ef67f907-aead-43e3-aa5f-3a4f7887cf9c\") " pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:47.063569 master-0 kubenswrapper[27819]: I0319 09:47:47.046852 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 19 09:47:47.063569 master-0 kubenswrapper[27819]: I0319 09:47:47.047172 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:47:47.063569 master-0 kubenswrapper[27819]: I0319 09:47:47.051021 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 19 09:47:47.063569 master-0 kubenswrapper[27819]: I0319 09:47:47.053670 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 19 09:47:47.063569 master-0 kubenswrapper[27819]: I0319 09:47:47.055266 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 19 09:47:47.063569 master-0 kubenswrapper[27819]: I0319 09:47:47.055283 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 19 09:47:47.160571 master-0 kubenswrapper[27819]: I0319 09:47:47.148266 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 19 09:47:47.209114 master-0 kubenswrapper[27819]: I0319 09:47:47.209044 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c6734183-4b45-41b2-9119-b052f090f77a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b1635f-8e90-448a-a4e5-53538b81928f\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.209695 master-0 kubenswrapper[27819]: I0319 09:47:47.209167 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-config-data-default\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.209695 master-0 kubenswrapper[27819]: I0319 09:47:47.209307 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-kolla-config\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " 
pod="openstack/openstack-galera-0" Mar 19 09:47:47.209695 master-0 kubenswrapper[27819]: I0319 09:47:47.209388 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clk82\" (UniqueName: \"kubernetes.io/projected/63768452-d82b-4b66-a5ed-dcc87ddac4f6-kube-api-access-clk82\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.209695 master-0 kubenswrapper[27819]: I0319 09:47:47.209465 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/63768452-d82b-4b66-a5ed-dcc87ddac4f6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.209695 master-0 kubenswrapper[27819]: I0319 09:47:47.209501 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.209695 master-0 kubenswrapper[27819]: I0319 09:47:47.209534 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/63768452-d82b-4b66-a5ed-dcc87ddac4f6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.209695 master-0 kubenswrapper[27819]: I0319 09:47:47.209659 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63768452-d82b-4b66-a5ed-dcc87ddac4f6-combined-ca-bundle\") pod \"openstack-galera-0\" 
(UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.312501 master-0 kubenswrapper[27819]: I0319 09:47:47.312433 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63768452-d82b-4b66-a5ed-dcc87ddac4f6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.312795 master-0 kubenswrapper[27819]: I0319 09:47:47.312738 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c6734183-4b45-41b2-9119-b052f090f77a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b1635f-8e90-448a-a4e5-53538b81928f\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.312941 master-0 kubenswrapper[27819]: I0319 09:47:47.312897 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-config-data-default\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.313065 master-0 kubenswrapper[27819]: I0319 09:47:47.313040 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-kolla-config\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.313187 master-0 kubenswrapper[27819]: I0319 09:47:47.313154 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clk82\" (UniqueName: \"kubernetes.io/projected/63768452-d82b-4b66-a5ed-dcc87ddac4f6-kube-api-access-clk82\") pod \"openstack-galera-0\" (UID: 
\"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.313278 master-0 kubenswrapper[27819]: I0319 09:47:47.313253 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/63768452-d82b-4b66-a5ed-dcc87ddac4f6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.313320 master-0 kubenswrapper[27819]: I0319 09:47:47.313291 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.313353 master-0 kubenswrapper[27819]: I0319 09:47:47.313319 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/63768452-d82b-4b66-a5ed-dcc87ddac4f6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.315498 master-0 kubenswrapper[27819]: I0319 09:47:47.315434 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-kolla-config\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.316241 master-0 kubenswrapper[27819]: I0319 09:47:47.316201 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-config-data-default\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " 
pod="openstack/openstack-galera-0" Mar 19 09:47:47.316830 master-0 kubenswrapper[27819]: I0319 09:47:47.316772 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/63768452-d82b-4b66-a5ed-dcc87ddac4f6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.317129 master-0 kubenswrapper[27819]: I0319 09:47:47.317101 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63768452-d82b-4b66-a5ed-dcc87ddac4f6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.318498 master-0 kubenswrapper[27819]: I0319 09:47:47.318461 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63768452-d82b-4b66-a5ed-dcc87ddac4f6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.319164 master-0 kubenswrapper[27819]: I0319 09:47:47.319134 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 09:47:47.319216 master-0 kubenswrapper[27819]: I0319 09:47:47.319169 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c6734183-4b45-41b2-9119-b052f090f77a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b1635f-8e90-448a-a4e5-53538b81928f\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/0216bc891c368c83b5ce4611ef9932e4456d77a58aa3fc1567ead8c943f49dea/globalmount\"" pod="openstack/openstack-galera-0" Mar 19 09:47:47.321096 master-0 kubenswrapper[27819]: I0319 09:47:47.321056 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/63768452-d82b-4b66-a5ed-dcc87ddac4f6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:47.336923 master-0 kubenswrapper[27819]: I0319 09:47:47.336846 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clk82\" (UniqueName: \"kubernetes.io/projected/63768452-d82b-4b66-a5ed-dcc87ddac4f6-kube-api-access-clk82\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:48.145580 master-0 kubenswrapper[27819]: I0319 09:47:48.137199 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 19 09:47:48.145580 master-0 kubenswrapper[27819]: I0319 09:47:48.140370 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.145580 master-0 kubenswrapper[27819]: I0319 09:47:48.143683 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 19 09:47:48.145580 master-0 kubenswrapper[27819]: I0319 09:47:48.145069 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 19 09:47:48.149572 master-0 kubenswrapper[27819]: I0319 09:47:48.146457 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 19 09:47:48.199564 master-0 kubenswrapper[27819]: I0319 09:47:48.198743 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.242222 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4431ae-26be-48d2-a988-bcc22db96846-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.242297 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb4431ae-26be-48d2-a988-bcc22db96846-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.242412 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntcst\" (UniqueName: \"kubernetes.io/projected/cb4431ae-26be-48d2-a988-bcc22db96846-kube-api-access-ntcst\") pod \"openstack-cell1-galera-0\" (UID: 
\"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.242535 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.242589 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0b38f51-6247-4757-a279-175e58afe9f0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1c86e501-fc52-497e-a7e6-f548c900ad75\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.242947 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4431ae-26be-48d2-a988-bcc22db96846-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.243026 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.248740 master-0 kubenswrapper[27819]: I0319 09:47:48.243110 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.344859 master-0 kubenswrapper[27819]: I0319 09:47:48.344798 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4431ae-26be-48d2-a988-bcc22db96846-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.344859 master-0 kubenswrapper[27819]: I0319 09:47:48.344853 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb4431ae-26be-48d2-a988-bcc22db96846-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.345185 master-0 kubenswrapper[27819]: I0319 09:47:48.344878 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntcst\" (UniqueName: \"kubernetes.io/projected/cb4431ae-26be-48d2-a988-bcc22db96846-kube-api-access-ntcst\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.345185 master-0 kubenswrapper[27819]: I0319 09:47:48.344982 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.345185 master-0 kubenswrapper[27819]: I0319 09:47:48.345034 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-b0b38f51-6247-4757-a279-175e58afe9f0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1c86e501-fc52-497e-a7e6-f548c900ad75\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.345185 master-0 kubenswrapper[27819]: I0319 09:47:48.345084 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4431ae-26be-48d2-a988-bcc22db96846-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.345185 master-0 kubenswrapper[27819]: I0319 09:47:48.345120 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.345185 master-0 kubenswrapper[27819]: I0319 09:47:48.345149 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.346981 master-0 kubenswrapper[27819]: I0319 09:47:48.346947 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb4431ae-26be-48d2-a988-bcc22db96846-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.347374 master-0 kubenswrapper[27819]: I0319 09:47:48.347338 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-9eedb610-2ca4-4b57-8105-c67ff32cb3fb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ac11290f-f1e9-4f54-967e-51d38e6e21fd\") pod \"rabbitmq-server-0\" (UID: \"3963c46c-0e6f-4a21-9719-469a187d3100\") " pod="openstack/rabbitmq-server-0" Mar 19 09:47:48.350052 master-0 kubenswrapper[27819]: I0319 09:47:48.349862 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.350052 master-0 kubenswrapper[27819]: I0319 09:47:48.349921 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.350271 master-0 kubenswrapper[27819]: I0319 09:47:48.350217 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb4431ae-26be-48d2-a988-bcc22db96846-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.351756 master-0 kubenswrapper[27819]: I0319 09:47:48.351697 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 09:47:48.351756 master-0 kubenswrapper[27819]: I0319 09:47:48.351735 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b0b38f51-6247-4757-a279-175e58afe9f0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1c86e501-fc52-497e-a7e6-f548c900ad75\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/9d15a9b77b4d685dd4f521ebdeb428013f0038e736d0d6c3946f2a8b10bd1327/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.352005 master-0 kubenswrapper[27819]: I0319 09:47:48.351965 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4431ae-26be-48d2-a988-bcc22db96846-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.355652 master-0 kubenswrapper[27819]: I0319 09:47:48.355613 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4431ae-26be-48d2-a988-bcc22db96846-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.366858 master-0 kubenswrapper[27819]: I0319 09:47:48.366454 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntcst\" (UniqueName: \"kubernetes.io/projected/cb4431ae-26be-48d2-a988-bcc22db96846-kube-api-access-ntcst\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:48.505667 master-0 kubenswrapper[27819]: I0319 09:47:48.505272 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 19 09:47:49.345943 master-0 kubenswrapper[27819]: I0319 09:47:49.345851 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c6734183-4b45-41b2-9119-b052f090f77a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^54b1635f-8e90-448a-a4e5-53538b81928f\") pod \"openstack-galera-0\" (UID: \"63768452-d82b-4b66-a5ed-dcc87ddac4f6\") " pod="openstack/openstack-galera-0" Mar 19 09:47:49.479369 master-0 kubenswrapper[27819]: I0319 09:47:49.478899 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 19 09:47:50.384810 master-0 kubenswrapper[27819]: I0319 09:47:50.384751 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b0b38f51-6247-4757-a279-175e58afe9f0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^1c86e501-fc52-497e-a7e6-f548c900ad75\") pod \"openstack-cell1-galera-0\" (UID: \"cb4431ae-26be-48d2-a988-bcc22db96846\") " pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:50.637994 master-0 kubenswrapper[27819]: I0319 09:47:50.637819 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 19 09:47:51.876676 master-0 kubenswrapper[27819]: I0319 09:47:51.876600 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8jlp5"] Mar 19 09:47:51.878167 master-0 kubenswrapper[27819]: I0319 09:47:51.878135 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.880105 master-0 kubenswrapper[27819]: I0319 09:47:51.880062 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 19 09:47:51.883333 master-0 kubenswrapper[27819]: I0319 09:47:51.883301 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 19 09:47:51.929621 master-0 kubenswrapper[27819]: I0319 09:47:51.929564 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jlp5"] Mar 19 09:47:51.943110 master-0 kubenswrapper[27819]: I0319 09:47:51.943049 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwffg\" (UniqueName: \"kubernetes.io/projected/0a614f48-076d-402e-8eec-10df235bb1b8-kube-api-access-mwffg\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.943323 master-0 kubenswrapper[27819]: I0319 09:47:51.943137 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-run-ovn\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.943323 master-0 kubenswrapper[27819]: I0319 09:47:51.943198 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a614f48-076d-402e-8eec-10df235bb1b8-ovn-controller-tls-certs\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.943323 master-0 kubenswrapper[27819]: I0319 09:47:51.943238 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a614f48-076d-402e-8eec-10df235bb1b8-scripts\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.943323 master-0 kubenswrapper[27819]: I0319 09:47:51.943285 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-log-ovn\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.943323 master-0 kubenswrapper[27819]: I0319 09:47:51.943321 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a614f48-076d-402e-8eec-10df235bb1b8-combined-ca-bundle\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.943607 master-0 kubenswrapper[27819]: I0319 09:47:51.943356 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-run\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:51.978039 master-0 kubenswrapper[27819]: I0319 09:47:51.977977 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-kgb6x"] Mar 19 09:47:51.981202 master-0 kubenswrapper[27819]: I0319 09:47:51.981026 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.003912 master-0 kubenswrapper[27819]: I0319 09:47:52.003822 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kgb6x"] Mar 19 09:47:52.045629 master-0 kubenswrapper[27819]: I0319 09:47:52.045567 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwffg\" (UniqueName: \"kubernetes.io/projected/0a614f48-076d-402e-8eec-10df235bb1b8-kube-api-access-mwffg\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.045912 master-0 kubenswrapper[27819]: I0319 09:47:52.045819 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-etc-ovs\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.045912 master-0 kubenswrapper[27819]: I0319 09:47:52.045899 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-run-ovn\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.046007 master-0 kubenswrapper[27819]: I0319 09:47:52.045980 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-run\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.046066 master-0 kubenswrapper[27819]: I0319 09:47:52.046039 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a614f48-076d-402e-8eec-10df235bb1b8-ovn-controller-tls-certs\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.046113 master-0 kubenswrapper[27819]: I0319 09:47:52.046082 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a614f48-076d-402e-8eec-10df235bb1b8-scripts\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.046150 master-0 kubenswrapper[27819]: I0319 09:47:52.046134 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a239e6e-3d2d-44d3-b11b-0636618a1719-scripts\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.046186 master-0 kubenswrapper[27819]: I0319 09:47:52.046163 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-log-ovn\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.046224 master-0 kubenswrapper[27819]: I0319 09:47:52.046204 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a614f48-076d-402e-8eec-10df235bb1b8-combined-ca-bundle\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.046264 master-0 kubenswrapper[27819]: I0319 09:47:52.046228 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-log\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.046264 master-0 kubenswrapper[27819]: I0319 09:47:52.046255 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-lib\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.046338 master-0 kubenswrapper[27819]: I0319 09:47:52.046287 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-run\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.046338 master-0 kubenswrapper[27819]: I0319 09:47:52.046311 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7k8f\" (UniqueName: \"kubernetes.io/projected/4a239e6e-3d2d-44d3-b11b-0636618a1719-kube-api-access-s7k8f\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.046646 master-0 kubenswrapper[27819]: I0319 09:47:52.046606 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-run-ovn\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.047250 master-0 kubenswrapper[27819]: I0319 09:47:52.047210 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-log-ovn\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.047365 master-0 kubenswrapper[27819]: I0319 09:47:52.047347 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0a614f48-076d-402e-8eec-10df235bb1b8-var-run\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.050591 master-0 kubenswrapper[27819]: I0319 09:47:52.049407 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a614f48-076d-402e-8eec-10df235bb1b8-ovn-controller-tls-certs\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.050591 master-0 kubenswrapper[27819]: I0319 09:47:52.049534 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a614f48-076d-402e-8eec-10df235bb1b8-scripts\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.053081 master-0 kubenswrapper[27819]: I0319 09:47:52.051413 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a614f48-076d-402e-8eec-10df235bb1b8-combined-ca-bundle\") pod \"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.085629 master-0 kubenswrapper[27819]: I0319 09:47:52.073409 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwffg\" (UniqueName: \"kubernetes.io/projected/0a614f48-076d-402e-8eec-10df235bb1b8-kube-api-access-mwffg\") pod 
\"ovn-controller-8jlp5\" (UID: \"0a614f48-076d-402e-8eec-10df235bb1b8\") " pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.148370 master-0 kubenswrapper[27819]: I0319 09:47:52.148256 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-run\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.148589 master-0 kubenswrapper[27819]: I0319 09:47:52.148140 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-run\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.148589 master-0 kubenswrapper[27819]: I0319 09:47:52.148454 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a239e6e-3d2d-44d3-b11b-0636618a1719-scripts\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.148791 master-0 kubenswrapper[27819]: I0319 09:47:52.148737 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-log\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.148839 master-0 kubenswrapper[27819]: I0319 09:47:52.148801 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-log\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " 
pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.148878 master-0 kubenswrapper[27819]: I0319 09:47:52.148847 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-lib\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.149127 master-0 kubenswrapper[27819]: I0319 09:47:52.149108 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-var-lib\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.149203 master-0 kubenswrapper[27819]: I0319 09:47:52.149186 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7k8f\" (UniqueName: \"kubernetes.io/projected/4a239e6e-3d2d-44d3-b11b-0636618a1719-kube-api-access-s7k8f\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.149324 master-0 kubenswrapper[27819]: I0319 09:47:52.149308 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-etc-ovs\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.149598 master-0 kubenswrapper[27819]: I0319 09:47:52.149567 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/4a239e6e-3d2d-44d3-b11b-0636618a1719-etc-ovs\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.153278 master-0 
kubenswrapper[27819]: I0319 09:47:52.153239 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a239e6e-3d2d-44d3-b11b-0636618a1719-scripts\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.163946 master-0 kubenswrapper[27819]: I0319 09:47:52.163895 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7k8f\" (UniqueName: \"kubernetes.io/projected/4a239e6e-3d2d-44d3-b11b-0636618a1719-kube-api-access-s7k8f\") pod \"ovn-controller-ovs-kgb6x\" (UID: \"4a239e6e-3d2d-44d3-b11b-0636618a1719\") " pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:52.221286 master-0 kubenswrapper[27819]: I0319 09:47:52.220886 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jlp5" Mar 19 09:47:52.307560 master-0 kubenswrapper[27819]: I0319 09:47:52.306387 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:47:56.781687 master-0 kubenswrapper[27819]: I0319 09:47:56.781618 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 09:47:56.783694 master-0 kubenswrapper[27819]: I0319 09:47:56.783403 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.787313 master-0 kubenswrapper[27819]: I0319 09:47:56.787276 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 19 09:47:56.787480 master-0 kubenswrapper[27819]: I0319 09:47:56.787434 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 19 09:47:56.787566 master-0 kubenswrapper[27819]: I0319 09:47:56.787489 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 19 09:47:56.787823 master-0 kubenswrapper[27819]: I0319 09:47:56.787449 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 19 09:47:56.810612 master-0 kubenswrapper[27819]: I0319 09:47:56.806213 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 09:47:56.956587 master-0 kubenswrapper[27819]: I0319 09:47:56.956519 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-config\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.956587 master-0 kubenswrapper[27819]: I0319 09:47:56.956592 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf7b941a-b73f-4273-98e0-d29ed2bad5d7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbeccf14-30e2-41b1-a605-c56ad2b46d35\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.956833 master-0 kubenswrapper[27819]: I0319 09:47:56.956613 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.957801 master-0 kubenswrapper[27819]: I0319 09:47:56.957760 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.957933 master-0 kubenswrapper[27819]: I0319 09:47:56.957895 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.958114 master-0 kubenswrapper[27819]: I0319 09:47:56.958084 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4945\" (UniqueName: \"kubernetes.io/projected/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-kube-api-access-g4945\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.958158 master-0 kubenswrapper[27819]: I0319 09:47:56.958122 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:56.958198 master-0 kubenswrapper[27819]: I0319 09:47:56.958164 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.059985 master-0 kubenswrapper[27819]: I0319 09:47:57.059867 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.059985 master-0 kubenswrapper[27819]: I0319 09:47:57.059958 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.060223 master-0 kubenswrapper[27819]: I0319 09:47:57.060019 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4945\" (UniqueName: \"kubernetes.io/projected/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-kube-api-access-g4945\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.060223 master-0 kubenswrapper[27819]: I0319 09:47:57.060051 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.060223 master-0 kubenswrapper[27819]: I0319 09:47:57.060081 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.060223 master-0 kubenswrapper[27819]: I0319 09:47:57.060132 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-config\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.060223 master-0 kubenswrapper[27819]: I0319 09:47:57.060163 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cf7b941a-b73f-4273-98e0-d29ed2bad5d7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbeccf14-30e2-41b1-a605-c56ad2b46d35\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.060223 master-0 kubenswrapper[27819]: I0319 09:47:57.060183 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.061425 master-0 kubenswrapper[27819]: I0319 09:47:57.061387 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.061507 master-0 kubenswrapper[27819]: I0319 09:47:57.061422 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.063910 master-0 kubenswrapper[27819]: I0319 09:47:57.063865 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-config\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.064873 master-0 kubenswrapper[27819]: I0319 09:47:57.064844 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 09:47:57.064966 master-0 kubenswrapper[27819]: I0319 09:47:57.064886 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf7b941a-b73f-4273-98e0-d29ed2bad5d7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbeccf14-30e2-41b1-a605-c56ad2b46d35\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/129bb751a36e117da327633b374f2556591681b9eae922f305f509d32bf7c373/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.064966 master-0 kubenswrapper[27819]: I0319 09:47:57.064892 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.069853 master-0 kubenswrapper[27819]: I0319 09:47:57.069817 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.069955 master-0 
kubenswrapper[27819]: I0319 09:47:57.069817 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:57.084615 master-0 kubenswrapper[27819]: I0319 09:47:57.084439 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4945\" (UniqueName: \"kubernetes.io/projected/9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac-kube-api-access-g4945\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:58.044802 master-0 kubenswrapper[27819]: I0319 09:47:58.044737 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 09:47:58.046975 master-0 kubenswrapper[27819]: I0319 09:47:58.046898 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.051019 master-0 kubenswrapper[27819]: I0319 09:47:58.050957 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 19 09:47:58.051150 master-0 kubenswrapper[27819]: I0319 09:47:58.051108 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 19 09:47:58.051292 master-0 kubenswrapper[27819]: I0319 09:47:58.051252 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 19 09:47:58.074627 master-0 kubenswrapper[27819]: I0319 09:47:58.074558 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 09:47:58.185475 master-0 kubenswrapper[27819]: I0319 09:47:58.185407 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb1b49e8-0aab-4155-bc5d-b0662b950a56-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.185475 master-0 kubenswrapper[27819]: I0319 09:47:58.185467 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1b49e8-0aab-4155-bc5d-b0662b950a56-config\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.185753 master-0 kubenswrapper[27819]: I0319 09:47:58.185493 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c9527d9a-d447-4bb3-9b52-f9bcc7c0851a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a14855b5-d67a-46b8-afd5-c186280b7c59\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.185753 master-0 
kubenswrapper[27819]: I0319 09:47:58.185519 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.185949 master-0 kubenswrapper[27819]: I0319 09:47:58.185897 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ccvk\" (UniqueName: \"kubernetes.io/projected/fb1b49e8-0aab-4155-bc5d-b0662b950a56-kube-api-access-4ccvk\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.186227 master-0 kubenswrapper[27819]: I0319 09:47:58.186201 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.186292 master-0 kubenswrapper[27819]: I0319 09:47:58.186271 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.186378 master-0 kubenswrapper[27819]: I0319 09:47:58.186357 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fb1b49e8-0aab-4155-bc5d-b0662b950a56-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " 
pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288432 master-0 kubenswrapper[27819]: I0319 09:47:58.288306 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288639 master-0 kubenswrapper[27819]: I0319 09:47:58.288507 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288639 master-0 kubenswrapper[27819]: I0319 09:47:58.288623 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fb1b49e8-0aab-4155-bc5d-b0662b950a56-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288709 master-0 kubenswrapper[27819]: I0319 09:47:58.288665 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb1b49e8-0aab-4155-bc5d-b0662b950a56-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288709 master-0 kubenswrapper[27819]: I0319 09:47:58.288689 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1b49e8-0aab-4155-bc5d-b0662b950a56-config\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288802 master-0 kubenswrapper[27819]: I0319 
09:47:58.288709 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c9527d9a-d447-4bb3-9b52-f9bcc7c0851a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a14855b5-d67a-46b8-afd5-c186280b7c59\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288802 master-0 kubenswrapper[27819]: I0319 09:47:58.288730 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.288890 master-0 kubenswrapper[27819]: I0319 09:47:58.288819 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ccvk\" (UniqueName: \"kubernetes.io/projected/fb1b49e8-0aab-4155-bc5d-b0662b950a56-kube-api-access-4ccvk\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.290151 master-0 kubenswrapper[27819]: I0319 09:47:58.290101 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fb1b49e8-0aab-4155-bc5d-b0662b950a56-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.290619 master-0 kubenswrapper[27819]: I0319 09:47:58.290532 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1b49e8-0aab-4155-bc5d-b0662b950a56-config\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.290694 master-0 kubenswrapper[27819]: I0319 09:47:58.290641 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/fb1b49e8-0aab-4155-bc5d-b0662b950a56-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.292719 master-0 kubenswrapper[27819]: I0319 09:47:58.292680 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 09:47:58.292794 master-0 kubenswrapper[27819]: I0319 09:47:58.292735 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c9527d9a-d447-4bb3-9b52-f9bcc7c0851a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a14855b5-d67a-46b8-afd5-c186280b7c59\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d09b86c09092aeb2906ebe0ce4305787e34fb1492e4db553108c7e9e489e946e/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.294882 master-0 kubenswrapper[27819]: I0319 09:47:58.294850 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.295142 master-0 kubenswrapper[27819]: I0319 09:47:58.295087 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.300216 master-0 kubenswrapper[27819]: I0319 09:47:58.300176 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb1b49e8-0aab-4155-bc5d-b0662b950a56-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.308129 master-0 kubenswrapper[27819]: I0319 09:47:58.308086 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ccvk\" (UniqueName: \"kubernetes.io/projected/fb1b49e8-0aab-4155-bc5d-b0662b950a56-kube-api-access-4ccvk\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:47:58.512991 master-0 kubenswrapper[27819]: I0319 09:47:58.512882 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 19 09:47:58.670562 master-0 kubenswrapper[27819]: I0319 09:47:58.670488 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf7b941a-b73f-4273-98e0-d29ed2bad5d7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^dbeccf14-30e2-41b1-a605-c56ad2b46d35\") pod \"ovsdbserver-nb-0\" (UID: \"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac\") " pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:58.954250 master-0 kubenswrapper[27819]: I0319 09:47:58.954115 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 19 09:47:59.019452 master-0 kubenswrapper[27819]: W0319 09:47:59.019365 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3963c46c_0e6f_4a21_9719_469a187d3100.slice/crio-1d70371fb05b564d6ad5f437c4b3300b11413ee8a3faf95b03c0461612969d60 WatchSource:0}: Error finding container 1d70371fb05b564d6ad5f437c4b3300b11413ee8a3faf95b03c0461612969d60: Status 404 returned error can't find the container with id 1d70371fb05b564d6ad5f437c4b3300b11413ee8a3faf95b03c0461612969d60 Mar 19 09:47:59.584097 master-0 kubenswrapper[27819]: I0319 09:47:59.584041 27819 generic.go:334] "Generic (PLEG): container finished" podID="073d89ae-4524-4fda-87d1-bdd81ef69236" containerID="1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496" exitCode=0 Mar 19 09:47:59.584581 master-0 kubenswrapper[27819]: I0319 09:47:59.584135 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" event={"ID":"073d89ae-4524-4fda-87d1-bdd81ef69236","Type":"ContainerDied","Data":"1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496"} Mar 19 09:47:59.590946 master-0 kubenswrapper[27819]: I0319 09:47:59.590873 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" event={"ID":"5279e013-1d5b-4c59-8deb-9fb1cc70212c","Type":"ContainerStarted","Data":"4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f"} Mar 19 09:47:59.595782 master-0 kubenswrapper[27819]: I0319 09:47:59.593744 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3963c46c-0e6f-4a21-9719-469a187d3100","Type":"ContainerStarted","Data":"1d70371fb05b564d6ad5f437c4b3300b11413ee8a3faf95b03c0461612969d60"} Mar 19 09:47:59.599437 master-0 kubenswrapper[27819]: I0319 09:47:59.599380 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-685c76cf85-ssxws" event={"ID":"ecc41f78-40cd-4fce-8749-2b3d239d18cb","Type":"ContainerStarted","Data":"5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4"} Mar 19 09:47:59.599613 master-0 kubenswrapper[27819]: I0319 09:47:59.599533 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" podUID="ecc41f78-40cd-4fce-8749-2b3d239d18cb" containerName="init" containerID="cri-o://5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4" gracePeriod=10 Mar 19 09:47:59.602709 master-0 kubenswrapper[27819]: I0319 09:47:59.602671 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" event={"ID":"9759090c-c277-4412-93dc-6bc7da2985c0","Type":"ContainerStarted","Data":"6b9775e4c42b3e62b56a1af9cfd0bab6b6ebe79a381a503ec4ed5d2578fcc7ab"} Mar 19 09:47:59.614274 master-0 kubenswrapper[27819]: I0319 09:47:59.609576 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 19 09:47:59.724502 master-0 kubenswrapper[27819]: W0319 09:47:59.723117 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63768452_d82b_4b66_a5ed_dcc87ddac4f6.slice/crio-8bb97d79d6e006dfe81d7cc5b92bf590b01db39833e9664df164ba20ac090589 WatchSource:0}: Error finding container 8bb97d79d6e006dfe81d7cc5b92bf590b01db39833e9664df164ba20ac090589: Status 404 returned error can't find the container with id 8bb97d79d6e006dfe81d7cc5b92bf590b01db39833e9664df164ba20ac090589 Mar 19 09:47:59.856794 master-0 kubenswrapper[27819]: E0319 09:47:59.856725 27819 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 19 09:47:59.856794 master-0 kubenswrapper[27819]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/9759090c-c277-4412-93dc-6bc7da2985c0/volume-subpaths/dns-svc/dnsmasq-dns/1` to 
`etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 09:47:59.856794 master-0 kubenswrapper[27819]: > podSandboxID="0265aa058aa7ba08cf003a2726f73baa81afb0acbd53faba51bd828c6af51ca0" Mar 19 09:47:59.857028 master-0 kubenswrapper[27819]: E0319 09:47:59.856962 27819 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 19 09:47:59.857028 master-0 kubenswrapper[27819]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d7h64dhb8hb8h587h59ch664h5c7h56dh67ch657h657h5fbh5chd8h9hcfh645h594h59ch565h669h648h5d5h8ch597h58bhd5h6fh67dh589hd4q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5k8pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-76849d6659-hgs6b_openstack(9759090c-c277-4412-93dc-6bc7da2985c0): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/9759090c-c277-4412-93dc-6bc7da2985c0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 19 09:47:59.857028 master-0 kubenswrapper[27819]: > logger="UnhandledError" Mar 19 09:47:59.858290 master-0 kubenswrapper[27819]: E0319 09:47:59.858241 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/9759090c-c277-4412-93dc-6bc7da2985c0/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" Mar 19 09:47:59.976565 master-0 
kubenswrapper[27819]: I0319 09:47:59.976416 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c9527d9a-d447-4bb3-9b52-f9bcc7c0851a\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a14855b5-d67a-46b8-afd5-c186280b7c59\") pod \"ovsdbserver-sb-0\" (UID: \"fb1b49e8-0aab-4155-bc5d-b0662b950a56\") " pod="openstack/ovsdbserver-sb-0" Mar 19 09:48:00.181225 master-0 kubenswrapper[27819]: I0319 09:48:00.181166 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 19 09:48:00.458041 master-0 kubenswrapper[27819]: I0319 09:48:00.454690 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:48:00.523400 master-0 kubenswrapper[27819]: I0319 09:48:00.522146 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 19 09:48:00.523400 master-0 kubenswrapper[27819]: I0319 09:48:00.522347 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:48:00.583572 master-0 kubenswrapper[27819]: I0319 09:48:00.583132 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 19 09:48:00.591692 master-0 kubenswrapper[27819]: I0319 09:48:00.591639 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jlp5"] Mar 19 09:48:00.600320 master-0 kubenswrapper[27819]: I0319 09:48:00.600240 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dftqw\" (UniqueName: \"kubernetes.io/projected/5279e013-1d5b-4c59-8deb-9fb1cc70212c-kube-api-access-dftqw\") pod \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " Mar 19 09:48:00.600491 master-0 kubenswrapper[27819]: I0319 09:48:00.600412 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-config\") pod \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " Mar 19 09:48:00.600491 master-0 kubenswrapper[27819]: I0319 09:48:00.600471 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-dns-svc\") pod \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\" (UID: \"5279e013-1d5b-4c59-8deb-9fb1cc70212c\") " Mar 19 09:48:00.604677 master-0 kubenswrapper[27819]: I0319 09:48:00.604620 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5279e013-1d5b-4c59-8deb-9fb1cc70212c-kube-api-access-dftqw" (OuterVolumeSpecName: "kube-api-access-dftqw") pod "5279e013-1d5b-4c59-8deb-9fb1cc70212c" (UID: "5279e013-1d5b-4c59-8deb-9fb1cc70212c"). InnerVolumeSpecName "kube-api-access-dftqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:00.612571 master-0 kubenswrapper[27819]: I0319 09:48:00.612481 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jlp5" event={"ID":"0a614f48-076d-402e-8eec-10df235bb1b8","Type":"ContainerStarted","Data":"5a595ee1cb8c92ada50b5856a556614c3813346216fce7a44e2831e69cdab31e"} Mar 19 09:48:00.616211 master-0 kubenswrapper[27819]: I0319 09:48:00.616088 27819 generic.go:334] "Generic (PLEG): container finished" podID="9759090c-c277-4412-93dc-6bc7da2985c0" containerID="6b9775e4c42b3e62b56a1af9cfd0bab6b6ebe79a381a503ec4ed5d2578fcc7ab" exitCode=0 Mar 19 09:48:00.617218 master-0 kubenswrapper[27819]: I0319 09:48:00.617177 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" event={"ID":"9759090c-c277-4412-93dc-6bc7da2985c0","Type":"ContainerDied","Data":"6b9775e4c42b3e62b56a1af9cfd0bab6b6ebe79a381a503ec4ed5d2578fcc7ab"} Mar 19 09:48:00.623588 master-0 kubenswrapper[27819]: I0319 09:48:00.623490 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef67f907-aead-43e3-aa5f-3a4f7887cf9c","Type":"ContainerStarted","Data":"c33c451760227ff270bcd39295d83f54c91dc95f4c70fc1915757c4e5acc1f89"} Mar 19 09:48:00.627426 master-0 kubenswrapper[27819]: I0319 09:48:00.626025 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5279e013-1d5b-4c59-8deb-9fb1cc70212c" (UID: "5279e013-1d5b-4c59-8deb-9fb1cc70212c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:00.627426 master-0 kubenswrapper[27819]: I0319 09:48:00.626231 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb4431ae-26be-48d2-a988-bcc22db96846","Type":"ContainerStarted","Data":"39b300769eaa3d48d864cb9c149a89e421201f17fa45eca10007d537d191f7c5"} Mar 19 09:48:00.633736 master-0 kubenswrapper[27819]: I0319 09:48:00.633668 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" event={"ID":"073d89ae-4524-4fda-87d1-bdd81ef69236","Type":"ContainerStarted","Data":"638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5"} Mar 19 09:48:00.633986 master-0 kubenswrapper[27819]: I0319 09:48:00.633945 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:48:00.638119 master-0 kubenswrapper[27819]: I0319 09:48:00.638074 27819 generic.go:334] "Generic (PLEG): container finished" podID="5279e013-1d5b-4c59-8deb-9fb1cc70212c" containerID="4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f" exitCode=0 Mar 19 09:48:00.638250 master-0 kubenswrapper[27819]: I0319 09:48:00.638205 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" event={"ID":"5279e013-1d5b-4c59-8deb-9fb1cc70212c","Type":"ContainerDied","Data":"4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f"} Mar 19 09:48:00.638306 master-0 kubenswrapper[27819]: I0319 09:48:00.638278 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" event={"ID":"5279e013-1d5b-4c59-8deb-9fb1cc70212c","Type":"ContainerDied","Data":"418668ec1e2e104968acdb5be2c8cde4fbb6e6276cf552c6153572830b46b2d5"} Mar 19 09:48:00.638306 master-0 kubenswrapper[27819]: I0319 09:48:00.638296 27819 scope.go:117] "RemoveContainer" 
containerID="4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f" Mar 19 09:48:00.638498 master-0 kubenswrapper[27819]: I0319 09:48:00.638475 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-config" (OuterVolumeSpecName: "config") pod "5279e013-1d5b-4c59-8deb-9fb1cc70212c" (UID: "5279e013-1d5b-4c59-8deb-9fb1cc70212c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:00.639364 master-0 kubenswrapper[27819]: I0319 09:48:00.638509 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8476fd89bc-vrtcp" Mar 19 09:48:00.678647 master-0 kubenswrapper[27819]: I0319 09:48:00.665246 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"63768452-d82b-4b66-a5ed-dcc87ddac4f6","Type":"ContainerStarted","Data":"8bb97d79d6e006dfe81d7cc5b92bf590b01db39833e9664df164ba20ac090589"} Mar 19 09:48:00.678647 master-0 kubenswrapper[27819]: I0319 09:48:00.673321 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" Mar 19 09:48:00.678647 master-0 kubenswrapper[27819]: I0319 09:48:00.673984 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" event={"ID":"ecc41f78-40cd-4fce-8749-2b3d239d18cb","Type":"ContainerDied","Data":"5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4"} Mar 19 09:48:00.678647 master-0 kubenswrapper[27819]: I0319 09:48:00.676978 27819 generic.go:334] "Generic (PLEG): container finished" podID="ecc41f78-40cd-4fce-8749-2b3d239d18cb" containerID="5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4" exitCode=0 Mar 19 09:48:00.678647 master-0 kubenswrapper[27819]: I0319 09:48:00.677076 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685c76cf85-ssxws" event={"ID":"ecc41f78-40cd-4fce-8749-2b3d239d18cb","Type":"ContainerDied","Data":"6e6860c4e6e3052b6b34f117b20ae6ba92e7ea26eeb4151ebc2cd03c74996199"} Mar 19 09:48:00.689886 master-0 kubenswrapper[27819]: I0319 09:48:00.689760 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" podStartSLOduration=3.750074621 podStartE2EDuration="19.689736851s" podCreationTimestamp="2026-03-19 09:47:41 +0000 UTC" firstStartedPulling="2026-03-19 09:47:43.239833484 +0000 UTC m=+848.161411176" lastFinishedPulling="2026-03-19 09:47:59.179495714 +0000 UTC m=+864.101073406" observedRunningTime="2026-03-19 09:48:00.67284127 +0000 UTC m=+865.594418972" watchObservedRunningTime="2026-03-19 09:48:00.689736851 +0000 UTC m=+865.611314553" Mar 19 09:48:00.690097 master-0 kubenswrapper[27819]: I0319 09:48:00.690043 27819 scope.go:117] "RemoveContainer" containerID="4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f" Mar 19 09:48:00.695818 master-0 kubenswrapper[27819]: E0319 09:48:00.695732 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f\": container with ID starting with 4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f not found: ID does not exist" containerID="4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f" Mar 19 09:48:00.695818 master-0 kubenswrapper[27819]: I0319 09:48:00.695782 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f"} err="failed to get container status \"4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f\": rpc error: code = NotFound desc = could not find container \"4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f\": container with ID starting with 4c5e92e6bf2fbd787ce062f1c2f4e01e74452c6180efd2923a75efb476d0599f not found: ID does not exist" Mar 19 09:48:00.695818 master-0 kubenswrapper[27819]: I0319 09:48:00.695812 27819 scope.go:117] "RemoveContainer" containerID="5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4" Mar 19 09:48:00.702440 master-0 kubenswrapper[27819]: I0319 09:48:00.702392 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc41f78-40cd-4fce-8749-2b3d239d18cb-config\") pod \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " Mar 19 09:48:00.702586 master-0 kubenswrapper[27819]: I0319 09:48:00.702516 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxfd\" (UniqueName: \"kubernetes.io/projected/ecc41f78-40cd-4fce-8749-2b3d239d18cb-kube-api-access-nsxfd\") pod \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\" (UID: \"ecc41f78-40cd-4fce-8749-2b3d239d18cb\") " Mar 19 09:48:00.705057 master-0 kubenswrapper[27819]: I0319 09:48:00.704930 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:00.705057 master-0 kubenswrapper[27819]: I0319 09:48:00.705021 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5279e013-1d5b-4c59-8deb-9fb1cc70212c-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:00.705057 master-0 kubenswrapper[27819]: I0319 09:48:00.705035 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dftqw\" (UniqueName: \"kubernetes.io/projected/5279e013-1d5b-4c59-8deb-9fb1cc70212c-kube-api-access-dftqw\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:00.706828 master-0 kubenswrapper[27819]: I0319 09:48:00.706343 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc41f78-40cd-4fce-8749-2b3d239d18cb-kube-api-access-nsxfd" (OuterVolumeSpecName: "kube-api-access-nsxfd") pod "ecc41f78-40cd-4fce-8749-2b3d239d18cb" (UID: "ecc41f78-40cd-4fce-8749-2b3d239d18cb"). InnerVolumeSpecName "kube-api-access-nsxfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:00.729359 master-0 kubenswrapper[27819]: I0319 09:48:00.729327 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 19 09:48:00.755584 master-0 kubenswrapper[27819]: I0319 09:48:00.753730 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc41f78-40cd-4fce-8749-2b3d239d18cb-config" (OuterVolumeSpecName: "config") pod "ecc41f78-40cd-4fce-8749-2b3d239d18cb" (UID: "ecc41f78-40cd-4fce-8749-2b3d239d18cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:00.791694 master-0 kubenswrapper[27819]: I0319 09:48:00.791650 27819 scope.go:117] "RemoveContainer" containerID="5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4" Mar 19 09:48:00.792185 master-0 kubenswrapper[27819]: E0319 09:48:00.792111 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4\": container with ID starting with 5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4 not found: ID does not exist" containerID="5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4" Mar 19 09:48:00.792185 master-0 kubenswrapper[27819]: I0319 09:48:00.792144 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4"} err="failed to get container status \"5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4\": rpc error: code = NotFound desc = could not find container \"5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4\": container with ID starting with 5e9cb55b390b4a83a2e97afcc3f4c17297620d66f693e996c348221ab455bab4 not found: ID does not exist" Mar 19 09:48:00.808972 master-0 kubenswrapper[27819]: I0319 09:48:00.808930 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc41f78-40cd-4fce-8749-2b3d239d18cb-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:00.808972 master-0 kubenswrapper[27819]: I0319 09:48:00.808972 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsxfd\" (UniqueName: \"kubernetes.io/projected/ecc41f78-40cd-4fce-8749-2b3d239d18cb-kube-api-access-nsxfd\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:00.810588 master-0 kubenswrapper[27819]: I0319 09:48:00.810556 27819 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-vrtcp"] Mar 19 09:48:00.820825 master-0 kubenswrapper[27819]: I0319 09:48:00.820780 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8476fd89bc-vrtcp"] Mar 19 09:48:00.896560 master-0 kubenswrapper[27819]: I0319 09:48:00.895265 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 19 09:48:00.950367 master-0 kubenswrapper[27819]: I0319 09:48:00.950187 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 19 09:48:00.960742 master-0 kubenswrapper[27819]: W0319 09:48:00.960170 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1b49e8_0aab_4155_bc5d_b0662b950a56.slice/crio-bcd5f1c16165e011be5ff79b3a14ec5ff577b0f92b2250c378d78b2244be5b12 WatchSource:0}: Error finding container bcd5f1c16165e011be5ff79b3a14ec5ff577b0f92b2250c378d78b2244be5b12: Status 404 returned error can't find the container with id bcd5f1c16165e011be5ff79b3a14ec5ff577b0f92b2250c378d78b2244be5b12 Mar 19 09:48:01.294186 master-0 kubenswrapper[27819]: I0319 09:48:01.294110 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5279e013-1d5b-4c59-8deb-9fb1cc70212c" path="/var/lib/kubelet/pods/5279e013-1d5b-4c59-8deb-9fb1cc70212c/volumes" Mar 19 09:48:01.391784 master-0 kubenswrapper[27819]: I0319 09:48:01.391619 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-ssxws"] Mar 19 09:48:01.406330 master-0 kubenswrapper[27819]: I0319 09:48:01.406285 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685c76cf85-ssxws"] Mar 19 09:48:01.696090 master-0 kubenswrapper[27819]: I0319 09:48:01.695974 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" 
event={"ID":"9759090c-c277-4412-93dc-6bc7da2985c0","Type":"ContainerStarted","Data":"13ad4b14e65293cf8aad28a82153a8bd95d72a0290c89f0895c096868af72c18"} Mar 19 09:48:01.696090 master-0 kubenswrapper[27819]: I0319 09:48:01.696064 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:48:01.697945 master-0 kubenswrapper[27819]: I0319 09:48:01.697917 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fb1b49e8-0aab-4155-bc5d-b0662b950a56","Type":"ContainerStarted","Data":"bcd5f1c16165e011be5ff79b3a14ec5ff577b0f92b2250c378d78b2244be5b12"} Mar 19 09:48:01.701371 master-0 kubenswrapper[27819]: I0319 09:48:01.701337 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b77f57a3-97e7-49e5-b10d-e352b75a0655","Type":"ContainerStarted","Data":"531de7e48cd6cbfd1f7b4f8a1a47b360e9a825d54c8f75a9d14627274aaee4c6"} Mar 19 09:48:01.707721 master-0 kubenswrapper[27819]: I0319 09:48:01.707483 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac","Type":"ContainerStarted","Data":"fee34a4d8b4a86f3b9ed53c296075d82a5cc99572690305e9163bc1cc423aedf"} Mar 19 09:48:01.821557 master-0 kubenswrapper[27819]: I0319 09:48:01.821468 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" podStartSLOduration=5.888261091 podStartE2EDuration="21.821446908s" podCreationTimestamp="2026-03-19 09:47:40 +0000 UTC" firstStartedPulling="2026-03-19 09:47:43.255036165 +0000 UTC m=+848.176613857" lastFinishedPulling="2026-03-19 09:47:59.188221982 +0000 UTC m=+864.109799674" observedRunningTime="2026-03-19 09:48:01.800395193 +0000 UTC m=+866.721972885" watchObservedRunningTime="2026-03-19 09:48:01.821446908 +0000 UTC m=+866.743024600" Mar 19 09:48:02.215356 master-0 kubenswrapper[27819]: I0319 
09:48:02.215303 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-kgb6x"] Mar 19 09:48:03.292705 master-0 kubenswrapper[27819]: I0319 09:48:03.292639 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc41f78-40cd-4fce-8749-2b3d239d18cb" path="/var/lib/kubelet/pods/ecc41f78-40cd-4fce-8749-2b3d239d18cb/volumes" Mar 19 09:48:03.745940 master-0 kubenswrapper[27819]: I0319 09:48:03.745882 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kgb6x" event={"ID":"4a239e6e-3d2d-44d3-b11b-0636618a1719","Type":"ContainerStarted","Data":"010dd773b368028f1c7b05d27f1473a9faa460aca34641c587146aeaade556d1"} Mar 19 09:48:06.069826 master-0 kubenswrapper[27819]: I0319 09:48:06.069751 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:48:06.535645 master-0 kubenswrapper[27819]: I0319 09:48:06.535552 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:48:06.636932 master-0 kubenswrapper[27819]: I0319 09:48:06.636774 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hgs6b"] Mar 19 09:48:06.787025 master-0 kubenswrapper[27819]: I0319 09:48:06.786873 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" containerName="dnsmasq-dns" containerID="cri-o://13ad4b14e65293cf8aad28a82153a8bd95d72a0290c89f0895c096868af72c18" gracePeriod=10 Mar 19 09:48:08.807609 master-0 kubenswrapper[27819]: I0319 09:48:08.807537 27819 generic.go:334] "Generic (PLEG): container finished" podID="9759090c-c277-4412-93dc-6bc7da2985c0" containerID="13ad4b14e65293cf8aad28a82153a8bd95d72a0290c89f0895c096868af72c18" exitCode=0 Mar 19 09:48:08.808330 master-0 kubenswrapper[27819]: I0319 09:48:08.807600 27819 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" event={"ID":"9759090c-c277-4412-93dc-6bc7da2985c0","Type":"ContainerDied","Data":"13ad4b14e65293cf8aad28a82153a8bd95d72a0290c89f0895c096868af72c18"} Mar 19 09:48:09.995954 master-0 kubenswrapper[27819]: I0319 09:48:09.987281 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:48:10.151569 master-0 kubenswrapper[27819]: I0319 09:48:10.147851 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-dns-svc\") pod \"9759090c-c277-4412-93dc-6bc7da2985c0\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " Mar 19 09:48:10.151569 master-0 kubenswrapper[27819]: I0319 09:48:10.148029 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-config\") pod \"9759090c-c277-4412-93dc-6bc7da2985c0\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " Mar 19 09:48:10.151569 master-0 kubenswrapper[27819]: I0319 09:48:10.148161 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k8pn\" (UniqueName: \"kubernetes.io/projected/9759090c-c277-4412-93dc-6bc7da2985c0-kube-api-access-5k8pn\") pod \"9759090c-c277-4412-93dc-6bc7da2985c0\" (UID: \"9759090c-c277-4412-93dc-6bc7da2985c0\") " Mar 19 09:48:10.151942 master-0 kubenswrapper[27819]: I0319 09:48:10.151830 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9759090c-c277-4412-93dc-6bc7da2985c0-kube-api-access-5k8pn" (OuterVolumeSpecName: "kube-api-access-5k8pn") pod "9759090c-c277-4412-93dc-6bc7da2985c0" (UID: "9759090c-c277-4412-93dc-6bc7da2985c0"). InnerVolumeSpecName "kube-api-access-5k8pn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:10.198007 master-0 kubenswrapper[27819]: I0319 09:48:10.197939 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9759090c-c277-4412-93dc-6bc7da2985c0" (UID: "9759090c-c277-4412-93dc-6bc7da2985c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:10.203123 master-0 kubenswrapper[27819]: I0319 09:48:10.202323 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-config" (OuterVolumeSpecName: "config") pod "9759090c-c277-4412-93dc-6bc7da2985c0" (UID: "9759090c-c277-4412-93dc-6bc7da2985c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:10.251613 master-0 kubenswrapper[27819]: I0319 09:48:10.251516 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k8pn\" (UniqueName: \"kubernetes.io/projected/9759090c-c277-4412-93dc-6bc7da2985c0-kube-api-access-5k8pn\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:10.251613 master-0 kubenswrapper[27819]: I0319 09:48:10.251576 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:10.251613 master-0 kubenswrapper[27819]: I0319 09:48:10.251587 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9759090c-c277-4412-93dc-6bc7da2985c0-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:10.828860 master-0 kubenswrapper[27819]: I0319 09:48:10.828719 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" 
event={"ID":"9759090c-c277-4412-93dc-6bc7da2985c0","Type":"ContainerDied","Data":"0265aa058aa7ba08cf003a2726f73baa81afb0acbd53faba51bd828c6af51ca0"} Mar 19 09:48:10.828860 master-0 kubenswrapper[27819]: I0319 09:48:10.828790 27819 scope.go:117] "RemoveContainer" containerID="13ad4b14e65293cf8aad28a82153a8bd95d72a0290c89f0895c096868af72c18" Mar 19 09:48:10.829519 master-0 kubenswrapper[27819]: I0319 09:48:10.829483 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76849d6659-hgs6b" Mar 19 09:48:11.411934 master-0 kubenswrapper[27819]: I0319 09:48:11.411829 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hgs6b"] Mar 19 09:48:11.434624 master-0 kubenswrapper[27819]: I0319 09:48:11.434544 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76849d6659-hgs6b"] Mar 19 09:48:11.765746 master-0 kubenswrapper[27819]: I0319 09:48:11.765707 27819 scope.go:117] "RemoveContainer" containerID="6b9775e4c42b3e62b56a1af9cfd0bab6b6ebe79a381a503ec4ed5d2578fcc7ab" Mar 19 09:48:12.863593 master-0 kubenswrapper[27819]: I0319 09:48:12.862158 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb4431ae-26be-48d2-a988-bcc22db96846","Type":"ContainerStarted","Data":"ec1c71c1aaa341bc47fb0fdbc00719acc7f1c73ee11e7888a02a3bd17a264631"} Mar 19 09:48:12.867811 master-0 kubenswrapper[27819]: I0319 09:48:12.867698 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b77f57a3-97e7-49e5-b10d-e352b75a0655","Type":"ContainerStarted","Data":"f6c785f6280be3711fdbbcd8496ef50719a89116adc55c5ad7a9f5d88d9a542c"} Mar 19 09:48:12.868103 master-0 kubenswrapper[27819]: I0319 09:48:12.867827 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 19 09:48:12.870023 master-0 kubenswrapper[27819]: I0319 09:48:12.869991 27819 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kgb6x" event={"ID":"4a239e6e-3d2d-44d3-b11b-0636618a1719","Type":"ContainerStarted","Data":"2bf1d4359e9355d9ece95176f69e53f5dc1547753604ee8adc80e00af9882b8e"} Mar 19 09:48:12.871466 master-0 kubenswrapper[27819]: I0319 09:48:12.871439 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"63768452-d82b-4b66-a5ed-dcc87ddac4f6","Type":"ContainerStarted","Data":"70b91f087fdc5b9c494e7f5b8bd564fa22730ac45df57a7b304a24e6adb30366"} Mar 19 09:48:12.876945 master-0 kubenswrapper[27819]: I0319 09:48:12.876874 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac","Type":"ContainerStarted","Data":"a61013d63218497880b36e3c75ec0f867fc4942fe6a3ba0b95395a61743ef4ef"} Mar 19 09:48:12.882374 master-0 kubenswrapper[27819]: I0319 09:48:12.881979 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jlp5" event={"ID":"0a614f48-076d-402e-8eec-10df235bb1b8","Type":"ContainerStarted","Data":"a17fcea23e22854a8132f60cc27ccfa05911cc7be79ff215012efc01b71214f9"} Mar 19 09:48:12.882986 master-0 kubenswrapper[27819]: I0319 09:48:12.882945 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8jlp5" Mar 19 09:48:13.293242 master-0 kubenswrapper[27819]: I0319 09:48:13.293042 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" path="/var/lib/kubelet/pods/9759090c-c277-4412-93dc-6bc7da2985c0/volumes" Mar 19 09:48:13.896428 master-0 kubenswrapper[27819]: I0319 09:48:13.896360 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fb1b49e8-0aab-4155-bc5d-b0662b950a56","Type":"ContainerStarted","Data":"757a0b5e4610fc3c3bc7dfd8e6155de80a2f86e67e930bdf4c55fc8c6fefc1a1"} Mar 19 09:48:13.899032 master-0 
kubenswrapper[27819]: I0319 09:48:13.898988 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3963c46c-0e6f-4a21-9719-469a187d3100","Type":"ContainerStarted","Data":"c9daed927127e007d399472d2e6693b92d54bd1b477358d96e2cceeedb3cf1d4"} Mar 19 09:48:13.902194 master-0 kubenswrapper[27819]: I0319 09:48:13.902145 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef67f907-aead-43e3-aa5f-3a4f7887cf9c","Type":"ContainerStarted","Data":"b8199541c61b63cfcca19db0d98e51dcad98179d82985a5ed651b4f8160583c3"} Mar 19 09:48:14.081725 master-0 kubenswrapper[27819]: I0319 09:48:14.080934 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=19.325881078 podStartE2EDuration="29.080906469s" podCreationTimestamp="2026-03-19 09:47:45 +0000 UTC" firstStartedPulling="2026-03-19 09:48:00.791636802 +0000 UTC m=+865.713214494" lastFinishedPulling="2026-03-19 09:48:10.546662193 +0000 UTC m=+875.468239885" observedRunningTime="2026-03-19 09:48:14.073063485 +0000 UTC m=+878.994641197" watchObservedRunningTime="2026-03-19 09:48:14.080906469 +0000 UTC m=+879.002484161" Mar 19 09:48:14.085931 master-0 kubenswrapper[27819]: I0319 09:48:14.085868 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8jlp5" podStartSLOduration=11.837246831 podStartE2EDuration="23.085848244s" podCreationTimestamp="2026-03-19 09:47:51 +0000 UTC" firstStartedPulling="2026-03-19 09:48:00.545366931 +0000 UTC m=+865.466944623" lastFinishedPulling="2026-03-19 09:48:11.793968344 +0000 UTC m=+876.715546036" observedRunningTime="2026-03-19 09:48:14.034123322 +0000 UTC m=+878.955701034" watchObservedRunningTime="2026-03-19 09:48:14.085848244 +0000 UTC m=+879.007425936" Mar 19 09:48:14.920069 master-0 kubenswrapper[27819]: I0319 09:48:14.920013 27819 generic.go:334] "Generic (PLEG): container finished" 
podID="4a239e6e-3d2d-44d3-b11b-0636618a1719" containerID="2bf1d4359e9355d9ece95176f69e53f5dc1547753604ee8adc80e00af9882b8e" exitCode=0 Mar 19 09:48:14.920781 master-0 kubenswrapper[27819]: I0319 09:48:14.920176 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kgb6x" event={"ID":"4a239e6e-3d2d-44d3-b11b-0636618a1719","Type":"ContainerDied","Data":"2bf1d4359e9355d9ece95176f69e53f5dc1547753604ee8adc80e00af9882b8e"} Mar 19 09:48:15.931658 master-0 kubenswrapper[27819]: I0319 09:48:15.931602 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kgb6x" event={"ID":"4a239e6e-3d2d-44d3-b11b-0636618a1719","Type":"ContainerStarted","Data":"1bf715d8f42d3ebea4000be9dcf441ee8a611c72b1cbaa7f496fe13fc196bf46"} Mar 19 09:48:17.963137 master-0 kubenswrapper[27819]: I0319 09:48:17.963059 27819 generic.go:334] "Generic (PLEG): container finished" podID="63768452-d82b-4b66-a5ed-dcc87ddac4f6" containerID="70b91f087fdc5b9c494e7f5b8bd564fa22730ac45df57a7b304a24e6adb30366" exitCode=0 Mar 19 09:48:17.963895 master-0 kubenswrapper[27819]: I0319 09:48:17.963120 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"63768452-d82b-4b66-a5ed-dcc87ddac4f6","Type":"ContainerDied","Data":"70b91f087fdc5b9c494e7f5b8bd564fa22730ac45df57a7b304a24e6adb30366"} Mar 19 09:48:17.966647 master-0 kubenswrapper[27819]: I0319 09:48:17.966571 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"9cf4cd9a-cea4-4fc7-869b-5d0e4ca0fcac","Type":"ContainerStarted","Data":"5c6d8206bdb13b89766b76efa167ef293c0bd6d7f69354f2b5bd01907300aa6c"} Mar 19 09:48:17.970060 master-0 kubenswrapper[27819]: I0319 09:48:17.970019 27819 generic.go:334] "Generic (PLEG): container finished" podID="cb4431ae-26be-48d2-a988-bcc22db96846" containerID="ec1c71c1aaa341bc47fb0fdbc00719acc7f1c73ee11e7888a02a3bd17a264631" exitCode=0 Mar 19 09:48:17.970677 master-0 
kubenswrapper[27819]: I0319 09:48:17.970617 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb4431ae-26be-48d2-a988-bcc22db96846","Type":"ContainerDied","Data":"ec1c71c1aaa341bc47fb0fdbc00719acc7f1c73ee11e7888a02a3bd17a264631"} Mar 19 09:48:17.973430 master-0 kubenswrapper[27819]: I0319 09:48:17.973390 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"fb1b49e8-0aab-4155-bc5d-b0662b950a56","Type":"ContainerStarted","Data":"8203514fd3bb4fd4127f72e7e49f908636763ca690503f103edced6acae4f81c"} Mar 19 09:48:17.977624 master-0 kubenswrapper[27819]: I0319 09:48:17.977522 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-kgb6x" event={"ID":"4a239e6e-3d2d-44d3-b11b-0636618a1719","Type":"ContainerStarted","Data":"cde38cf999df2039f6b6c9af5be2acffad18bb38574fdea04eae17b80e662b9f"} Mar 19 09:48:17.977869 master-0 kubenswrapper[27819]: I0319 09:48:17.977836 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:48:18.011858 master-0 kubenswrapper[27819]: I0319 09:48:18.011773 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.000686879 podStartE2EDuration="26.011754818s" podCreationTimestamp="2026-03-19 09:47:52 +0000 UTC" firstStartedPulling="2026-03-19 09:48:00.964774137 +0000 UTC m=+865.886351829" lastFinishedPulling="2026-03-19 09:48:16.975842076 +0000 UTC m=+881.897419768" observedRunningTime="2026-03-19 09:48:18.009016454 +0000 UTC m=+882.930594166" watchObservedRunningTime="2026-03-19 09:48:18.011754818 +0000 UTC m=+882.933332500" Mar 19 09:48:18.058580 master-0 kubenswrapper[27819]: I0319 09:48:18.058485 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=10.001980725 podStartE2EDuration="26.058463683s" 
podCreationTimestamp="2026-03-19 09:47:52 +0000 UTC" firstStartedPulling="2026-03-19 09:48:00.900506583 +0000 UTC m=+865.822084265" lastFinishedPulling="2026-03-19 09:48:16.956989521 +0000 UTC m=+881.878567223" observedRunningTime="2026-03-19 09:48:18.054463974 +0000 UTC m=+882.976041676" watchObservedRunningTime="2026-03-19 09:48:18.058463683 +0000 UTC m=+882.980041375" Mar 19 09:48:18.182499 master-0 kubenswrapper[27819]: I0319 09:48:18.182457 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 19 09:48:18.229331 master-0 kubenswrapper[27819]: I0319 09:48:18.228426 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 19 09:48:18.261774 master-0 kubenswrapper[27819]: I0319 09:48:18.261654 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-kgb6x" podStartSLOduration=18.27179156 podStartE2EDuration="27.261627178s" podCreationTimestamp="2026-03-19 09:47:51 +0000 UTC" firstStartedPulling="2026-03-19 09:48:02.804198048 +0000 UTC m=+867.725775740" lastFinishedPulling="2026-03-19 09:48:11.794033656 +0000 UTC m=+876.715611358" observedRunningTime="2026-03-19 09:48:18.091949097 +0000 UTC m=+883.013526779" watchObservedRunningTime="2026-03-19 09:48:18.261627178 +0000 UTC m=+883.183204880" Mar 19 09:48:18.955631 master-0 kubenswrapper[27819]: I0319 09:48:18.955572 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 19 09:48:18.988367 master-0 kubenswrapper[27819]: I0319 09:48:18.988302 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cb4431ae-26be-48d2-a988-bcc22db96846","Type":"ContainerStarted","Data":"a1b793232ecbaea577b6046ef38e3dfce491bcb22630ec9ac1260e77d44428e4"} Mar 19 09:48:18.990190 master-0 kubenswrapper[27819]: I0319 09:48:18.990151 27819 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-galera-0" event={"ID":"63768452-d82b-4b66-a5ed-dcc87ddac4f6","Type":"ContainerStarted","Data":"a35b0d997831ea89f99cd710b8a97140ff12b786a32288bd3e5fd95191eaa1f2"} Mar 19 09:48:18.991164 master-0 kubenswrapper[27819]: I0319 09:48:18.991124 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 19 09:48:18.991164 master-0 kubenswrapper[27819]: I0319 09:48:18.991153 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:48:19.025746 master-0 kubenswrapper[27819]: I0319 09:48:19.020883 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.806916108 podStartE2EDuration="34.020866969s" podCreationTimestamp="2026-03-19 09:47:45 +0000 UTC" firstStartedPulling="2026-03-19 09:48:00.524658216 +0000 UTC m=+865.446235908" lastFinishedPulling="2026-03-19 09:48:07.738609067 +0000 UTC m=+872.660186769" observedRunningTime="2026-03-19 09:48:19.010987179 +0000 UTC m=+883.932564881" watchObservedRunningTime="2026-03-19 09:48:19.020866969 +0000 UTC m=+883.942444661" Mar 19 09:48:19.031298 master-0 kubenswrapper[27819]: I0319 09:48:19.031255 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 19 09:48:19.048174 master-0 kubenswrapper[27819]: I0319 09:48:19.048094 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.506675097 podStartE2EDuration="36.048074901s" podCreationTimestamp="2026-03-19 09:47:43 +0000 UTC" firstStartedPulling="2026-03-19 09:47:59.727620243 +0000 UTC m=+864.649197935" lastFinishedPulling="2026-03-19 09:48:05.269020047 +0000 UTC m=+870.190597739" observedRunningTime="2026-03-19 09:48:19.040977117 +0000 UTC m=+883.962554819" watchObservedRunningTime="2026-03-19 09:48:19.048074901 +0000 UTC 
m=+883.969652633" Mar 19 09:48:19.380130 master-0 kubenswrapper[27819]: I0319 09:48:19.380067 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-zlg8r"] Mar 19 09:48:19.380695 master-0 kubenswrapper[27819]: E0319 09:48:19.380663 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" containerName="dnsmasq-dns" Mar 19 09:48:19.380695 master-0 kubenswrapper[27819]: I0319 09:48:19.380690 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" containerName="dnsmasq-dns" Mar 19 09:48:19.380788 master-0 kubenswrapper[27819]: E0319 09:48:19.380741 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5279e013-1d5b-4c59-8deb-9fb1cc70212c" containerName="init" Mar 19 09:48:19.380788 master-0 kubenswrapper[27819]: I0319 09:48:19.380753 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="5279e013-1d5b-4c59-8deb-9fb1cc70212c" containerName="init" Mar 19 09:48:19.380788 master-0 kubenswrapper[27819]: E0319 09:48:19.380780 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc41f78-40cd-4fce-8749-2b3d239d18cb" containerName="init" Mar 19 09:48:19.380788 master-0 kubenswrapper[27819]: I0319 09:48:19.380788 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc41f78-40cd-4fce-8749-2b3d239d18cb" containerName="init" Mar 19 09:48:19.380974 master-0 kubenswrapper[27819]: E0319 09:48:19.380808 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" containerName="init" Mar 19 09:48:19.380974 master-0 kubenswrapper[27819]: I0319 09:48:19.380818 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" containerName="init" Mar 19 09:48:19.381086 master-0 kubenswrapper[27819]: I0319 09:48:19.381065 27819 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ecc41f78-40cd-4fce-8749-2b3d239d18cb" containerName="init" Mar 19 09:48:19.381126 master-0 kubenswrapper[27819]: I0319 09:48:19.381108 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="9759090c-c277-4412-93dc-6bc7da2985c0" containerName="dnsmasq-dns" Mar 19 09:48:19.381159 master-0 kubenswrapper[27819]: I0319 09:48:19.381131 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="5279e013-1d5b-4c59-8deb-9fb1cc70212c" containerName="init" Mar 19 09:48:19.382342 master-0 kubenswrapper[27819]: I0319 09:48:19.382309 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.387731 master-0 kubenswrapper[27819]: I0319 09:48:19.387663 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 19 09:48:19.402925 master-0 kubenswrapper[27819]: I0319 09:48:19.402466 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-zlg8r"] Mar 19 09:48:19.414193 master-0 kubenswrapper[27819]: I0319 09:48:19.414135 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7cq7n"] Mar 19 09:48:19.415636 master-0 kubenswrapper[27819]: I0319 09:48:19.415586 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.418679 master-0 kubenswrapper[27819]: I0319 09:48:19.418611 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 19 09:48:19.432678 master-0 kubenswrapper[27819]: I0319 09:48:19.431371 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7cq7n"] Mar 19 09:48:19.462556 master-0 kubenswrapper[27819]: I0319 09:48:19.462417 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5253f244-cfca-4e85-b05d-4e1c589f459f-config\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.462690 master-0 kubenswrapper[27819]: I0319 09:48:19.462643 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5253f244-cfca-4e85-b05d-4e1c589f459f-ovs-rundir\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.462769 master-0 kubenswrapper[27819]: I0319 09:48:19.462707 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5253f244-cfca-4e85-b05d-4e1c589f459f-combined-ca-bundle\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.462821 master-0 kubenswrapper[27819]: I0319 09:48:19.462781 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5253f244-cfca-4e85-b05d-4e1c589f459f-ovn-rundir\") pod 
\"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.462917 master-0 kubenswrapper[27819]: I0319 09:48:19.462889 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-ovsdbserver-sb\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.462969 master-0 kubenswrapper[27819]: I0319 09:48:19.462949 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-config\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.463107 master-0 kubenswrapper[27819]: I0319 09:48:19.463059 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5253f244-cfca-4e85-b05d-4e1c589f459f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.463107 master-0 kubenswrapper[27819]: I0319 09:48:19.463091 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xf7\" (UniqueName: \"kubernetes.io/projected/c532c07c-5664-4a96-ae4b-b754322407e8-kube-api-access-n2xf7\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.463200 master-0 kubenswrapper[27819]: I0319 09:48:19.463125 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-dns-svc\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.463200 master-0 kubenswrapper[27819]: I0319 09:48:19.463150 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72kc\" (UniqueName: \"kubernetes.io/projected/5253f244-cfca-4e85-b05d-4e1c589f459f-kube-api-access-q72kc\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.481932 master-0 kubenswrapper[27819]: I0319 09:48:19.479616 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 19 09:48:19.481932 master-0 kubenswrapper[27819]: I0319 09:48:19.479660 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.591483 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5253f244-cfca-4e85-b05d-4e1c589f459f-config\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.592494 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5253f244-cfca-4e85-b05d-4e1c589f459f-config\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.592592 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/5253f244-cfca-4e85-b05d-4e1c589f459f-ovs-rundir\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.592679 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5253f244-cfca-4e85-b05d-4e1c589f459f-combined-ca-bundle\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.592723 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5253f244-cfca-4e85-b05d-4e1c589f459f-ovn-rundir\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.592870 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-ovsdbserver-sb\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.592957 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-config\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.592958 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/5253f244-cfca-4e85-b05d-4e1c589f459f-ovn-rundir\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.593118 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5253f244-cfca-4e85-b05d-4e1c589f459f-ovs-rundir\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.593224 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5253f244-cfca-4e85-b05d-4e1c589f459f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.593259 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xf7\" (UniqueName: \"kubernetes.io/projected/c532c07c-5664-4a96-ae4b-b754322407e8-kube-api-access-n2xf7\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.593320 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72kc\" (UniqueName: \"kubernetes.io/projected/5253f244-cfca-4e85-b05d-4e1c589f459f-kube-api-access-q72kc\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.593410 master-0 kubenswrapper[27819]: I0319 09:48:19.593347 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-dns-svc\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.594204 master-0 kubenswrapper[27819]: I0319 09:48:19.593857 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-ovsdbserver-sb\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.594204 master-0 kubenswrapper[27819]: I0319 09:48:19.593915 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-config\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.595644 master-0 kubenswrapper[27819]: I0319 09:48:19.594818 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-dns-svc\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.598129 master-0 kubenswrapper[27819]: I0319 09:48:19.596325 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5253f244-cfca-4e85-b05d-4e1c589f459f-combined-ca-bundle\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.602672 master-0 kubenswrapper[27819]: I0319 09:48:19.598698 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5253f244-cfca-4e85-b05d-4e1c589f459f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.618569 master-0 kubenswrapper[27819]: I0319 09:48:19.615371 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72kc\" (UniqueName: \"kubernetes.io/projected/5253f244-cfca-4e85-b05d-4e1c589f459f-kube-api-access-q72kc\") pod \"ovn-controller-metrics-7cq7n\" (UID: \"5253f244-cfca-4e85-b05d-4e1c589f459f\") " pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.618569 master-0 kubenswrapper[27819]: I0319 09:48:19.617457 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xf7\" (UniqueName: \"kubernetes.io/projected/c532c07c-5664-4a96-ae4b-b754322407e8-kube-api-access-n2xf7\") pod \"dnsmasq-dns-65db7fd8ff-zlg8r\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.671698 master-0 kubenswrapper[27819]: I0319 09:48:19.671266 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-zlg8r"] Mar 19 09:48:19.672194 master-0 kubenswrapper[27819]: I0319 09:48:19.672024 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:19.691473 master-0 kubenswrapper[27819]: I0319 09:48:19.691407 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76f498f559-47hh4"] Mar 19 09:48:19.693128 master-0 kubenswrapper[27819]: I0319 09:48:19.693089 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.699588 master-0 kubenswrapper[27819]: I0319 09:48:19.697808 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 19 09:48:19.734950 master-0 kubenswrapper[27819]: I0319 09:48:19.733960 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-47hh4"] Mar 19 09:48:19.780343 master-0 kubenswrapper[27819]: I0319 09:48:19.780281 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7cq7n" Mar 19 09:48:19.796504 master-0 kubenswrapper[27819]: I0319 09:48:19.796460 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-dns-svc\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.796614 master-0 kubenswrapper[27819]: I0319 09:48:19.796586 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ln4\" (UniqueName: \"kubernetes.io/projected/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-kube-api-access-r2ln4\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.796976 master-0 kubenswrapper[27819]: I0319 09:48:19.796640 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.796976 master-0 kubenswrapper[27819]: I0319 09:48:19.796671 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.796976 master-0 kubenswrapper[27819]: I0319 09:48:19.796758 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-config\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.900180 master-0 kubenswrapper[27819]: I0319 09:48:19.900122 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-config\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.900416 master-0 kubenswrapper[27819]: I0319 09:48:19.900390 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-dns-svc\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.900523 master-0 kubenswrapper[27819]: I0319 09:48:19.900501 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ln4\" (UniqueName: \"kubernetes.io/projected/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-kube-api-access-r2ln4\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.900649 master-0 kubenswrapper[27819]: I0319 09:48:19.900623 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.900697 master-0 kubenswrapper[27819]: I0319 09:48:19.900650 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.901185 master-0 kubenswrapper[27819]: I0319 09:48:19.901143 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-config\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.901475 master-0 kubenswrapper[27819]: I0319 09:48:19.901445 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-sb\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.902091 master-0 kubenswrapper[27819]: I0319 09:48:19.902054 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-dns-svc\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.902157 master-0 kubenswrapper[27819]: I0319 09:48:19.902103 27819 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-nb\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.921276 master-0 kubenswrapper[27819]: I0319 09:48:19.920729 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ln4\" (UniqueName: \"kubernetes.io/projected/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-kube-api-access-r2ln4\") pod \"dnsmasq-dns-76f498f559-47hh4\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:19.954978 master-0 kubenswrapper[27819]: I0319 09:48:19.954926 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 19 09:48:20.004500 master-0 kubenswrapper[27819]: I0319 09:48:20.004440 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 19 09:48:20.106281 master-0 kubenswrapper[27819]: I0319 09:48:20.106136 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:20.185110 master-0 kubenswrapper[27819]: I0319 09:48:20.185061 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-zlg8r"] Mar 19 09:48:20.369900 master-0 kubenswrapper[27819]: W0319 09:48:20.369501 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5253f244_cfca_4e85_b05d_4e1c589f459f.slice/crio-5afd12635a1d88e77b1578964c3b7bc508e9dc385e5cafd8a59279d1141f0a17 WatchSource:0}: Error finding container 5afd12635a1d88e77b1578964c3b7bc508e9dc385e5cafd8a59279d1141f0a17: Status 404 returned error can't find the container with id 5afd12635a1d88e77b1578964c3b7bc508e9dc385e5cafd8a59279d1141f0a17 Mar 19 09:48:20.371365 master-0 kubenswrapper[27819]: I0319 09:48:20.371207 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7cq7n"] Mar 19 09:48:20.629786 master-0 kubenswrapper[27819]: I0319 09:48:20.625595 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-47hh4"] Mar 19 09:48:20.629786 master-0 kubenswrapper[27819]: W0319 09:48:20.628765 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91f2fdc0_7532_40cd_863d_df34b7b3dd0a.slice/crio-49a63e8978182074740808d711f915ce9de049954eb778bc598917efb7da7f9e WatchSource:0}: Error finding container 49a63e8978182074740808d711f915ce9de049954eb778bc598917efb7da7f9e: Status 404 returned error can't find the container with id 49a63e8978182074740808d711f915ce9de049954eb778bc598917efb7da7f9e Mar 19 09:48:20.639099 master-0 kubenswrapper[27819]: I0319 09:48:20.639039 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 19 09:48:20.643699 master-0 kubenswrapper[27819]: I0319 09:48:20.643615 27819 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 19 09:48:20.753783 master-0 kubenswrapper[27819]: I0319 09:48:20.753720 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 19 09:48:21.010706 master-0 kubenswrapper[27819]: I0319 09:48:21.010640 27819 generic.go:334] "Generic (PLEG): container finished" podID="c532c07c-5664-4a96-ae4b-b754322407e8" containerID="306bf968acc075e36c68133c4f2a555af7e8f8d872dec4cd097f06c2c14a2b03" exitCode=0 Mar 19 09:48:21.011246 master-0 kubenswrapper[27819]: I0319 09:48:21.010715 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" event={"ID":"c532c07c-5664-4a96-ae4b-b754322407e8","Type":"ContainerDied","Data":"306bf968acc075e36c68133c4f2a555af7e8f8d872dec4cd097f06c2c14a2b03"} Mar 19 09:48:21.011246 master-0 kubenswrapper[27819]: I0319 09:48:21.010739 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" event={"ID":"c532c07c-5664-4a96-ae4b-b754322407e8","Type":"ContainerStarted","Data":"6e459043ecf8f4128573217c7960d79866d802d96ff7bab4f45938b7fc9b9a20"} Mar 19 09:48:21.012427 master-0 kubenswrapper[27819]: I0319 09:48:21.012384 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7cq7n" event={"ID":"5253f244-cfca-4e85-b05d-4e1c589f459f","Type":"ContainerStarted","Data":"129fa84c71567f670041bce51d74575e808ab793054fd8f2e9fe42770495275c"} Mar 19 09:48:21.012427 master-0 kubenswrapper[27819]: I0319 09:48:21.012419 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7cq7n" event={"ID":"5253f244-cfca-4e85-b05d-4e1c589f459f","Type":"ContainerStarted","Data":"5afd12635a1d88e77b1578964c3b7bc508e9dc385e5cafd8a59279d1141f0a17"} Mar 19 09:48:21.017517 master-0 kubenswrapper[27819]: I0319 09:48:21.017449 27819 generic.go:334] "Generic (PLEG): container finished" 
podID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" containerID="f1946869e84c33ac56a3d769f916c095e4dbd52774b6b0a26a3472c408e90e7e" exitCode=0 Mar 19 09:48:21.028002 master-0 kubenswrapper[27819]: I0319 09:48:21.027930 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-47hh4" event={"ID":"91f2fdc0-7532-40cd-863d-df34b7b3dd0a","Type":"ContainerDied","Data":"f1946869e84c33ac56a3d769f916c095e4dbd52774b6b0a26a3472c408e90e7e"} Mar 19 09:48:21.028002 master-0 kubenswrapper[27819]: I0319 09:48:21.028006 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-47hh4" event={"ID":"91f2fdc0-7532-40cd-863d-df34b7b3dd0a","Type":"ContainerStarted","Data":"49a63e8978182074740808d711f915ce9de049954eb778bc598917efb7da7f9e"} Mar 19 09:48:21.061568 master-0 kubenswrapper[27819]: I0319 09:48:21.061071 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7cq7n" podStartSLOduration=2.061049258 podStartE2EDuration="2.061049258s" podCreationTimestamp="2026-03-19 09:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:48:21.050970804 +0000 UTC m=+885.972548496" watchObservedRunningTime="2026-03-19 09:48:21.061049258 +0000 UTC m=+885.982626950" Mar 19 09:48:21.096642 master-0 kubenswrapper[27819]: I0319 09:48:21.086811 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 19 09:48:21.448312 master-0 kubenswrapper[27819]: I0319 09:48:21.448267 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:21.467533 master-0 kubenswrapper[27819]: I0319 09:48:21.467471 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-ovsdbserver-sb\") pod \"c532c07c-5664-4a96-ae4b-b754322407e8\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " Mar 19 09:48:21.467778 master-0 kubenswrapper[27819]: I0319 09:48:21.467683 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-dns-svc\") pod \"c532c07c-5664-4a96-ae4b-b754322407e8\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " Mar 19 09:48:21.467778 master-0 kubenswrapper[27819]: I0319 09:48:21.467708 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xf7\" (UniqueName: \"kubernetes.io/projected/c532c07c-5664-4a96-ae4b-b754322407e8-kube-api-access-n2xf7\") pod \"c532c07c-5664-4a96-ae4b-b754322407e8\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " Mar 19 09:48:21.467920 master-0 kubenswrapper[27819]: I0319 09:48:21.467895 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-config\") pod \"c532c07c-5664-4a96-ae4b-b754322407e8\" (UID: \"c532c07c-5664-4a96-ae4b-b754322407e8\") " Mar 19 09:48:21.473332 master-0 kubenswrapper[27819]: I0319 09:48:21.472995 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c532c07c-5664-4a96-ae4b-b754322407e8-kube-api-access-n2xf7" (OuterVolumeSpecName: "kube-api-access-n2xf7") pod "c532c07c-5664-4a96-ae4b-b754322407e8" (UID: "c532c07c-5664-4a96-ae4b-b754322407e8"). InnerVolumeSpecName "kube-api-access-n2xf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:21.490636 master-0 kubenswrapper[27819]: I0319 09:48:21.490577 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c532c07c-5664-4a96-ae4b-b754322407e8" (UID: "c532c07c-5664-4a96-ae4b-b754322407e8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:21.490925 master-0 kubenswrapper[27819]: I0319 09:48:21.490872 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c532c07c-5664-4a96-ae4b-b754322407e8" (UID: "c532c07c-5664-4a96-ae4b-b754322407e8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:21.491789 master-0 kubenswrapper[27819]: I0319 09:48:21.491734 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-config" (OuterVolumeSpecName: "config") pod "c532c07c-5664-4a96-ae4b-b754322407e8" (UID: "c532c07c-5664-4a96-ae4b-b754322407e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:21.574640 master-0 kubenswrapper[27819]: I0319 09:48:21.571653 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:21.574640 master-0 kubenswrapper[27819]: I0319 09:48:21.571700 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:21.574640 master-0 kubenswrapper[27819]: I0319 09:48:21.571719 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xf7\" (UniqueName: \"kubernetes.io/projected/c532c07c-5664-4a96-ae4b-b754322407e8-kube-api-access-n2xf7\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:21.574640 master-0 kubenswrapper[27819]: I0319 09:48:21.571733 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c532c07c-5664-4a96-ae4b-b754322407e8-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:21.585818 master-0 kubenswrapper[27819]: I0319 09:48:21.585726 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 19 09:48:21.586302 master-0 kubenswrapper[27819]: E0319 09:48:21.586272 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c532c07c-5664-4a96-ae4b-b754322407e8" containerName="init" Mar 19 09:48:21.586302 master-0 kubenswrapper[27819]: I0319 09:48:21.586299 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="c532c07c-5664-4a96-ae4b-b754322407e8" containerName="init" Mar 19 09:48:21.586637 master-0 kubenswrapper[27819]: I0319 09:48:21.586609 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="c532c07c-5664-4a96-ae4b-b754322407e8" containerName="init" Mar 19 09:48:21.588039 master-0 
kubenswrapper[27819]: I0319 09:48:21.588008 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 19 09:48:21.593239 master-0 kubenswrapper[27819]: I0319 09:48:21.592310 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 19 09:48:21.596289 master-0 kubenswrapper[27819]: I0319 09:48:21.596228 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 19 09:48:21.596516 master-0 kubenswrapper[27819]: I0319 09:48:21.596492 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 19 09:48:21.611471 master-0 kubenswrapper[27819]: I0319 09:48:21.611410 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 19 09:48:21.678572 master-0 kubenswrapper[27819]: I0319 09:48:21.673722 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.678572 master-0 kubenswrapper[27819]: I0319 09:48:21.673897 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7g2\" (UniqueName: \"kubernetes.io/projected/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-kube-api-access-2l7g2\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.678572 master-0 kubenswrapper[27819]: I0319 09:48:21.673947 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.678572 master-0 kubenswrapper[27819]: I0319 09:48:21.674018 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.678572 master-0 kubenswrapper[27819]: I0319 09:48:21.674300 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.678572 master-0 kubenswrapper[27819]: I0319 09:48:21.674331 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-config\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.678572 master-0 kubenswrapper[27819]: I0319 09:48:21.674442 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-scripts\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.776393 master-0 kubenswrapper[27819]: I0319 09:48:21.776277 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 
09:48:21.776393 master-0 kubenswrapper[27819]: I0319 09:48:21.776375 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l7g2\" (UniqueName: \"kubernetes.io/projected/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-kube-api-access-2l7g2\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.776393 master-0 kubenswrapper[27819]: I0319 09:48:21.776400 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.776393 master-0 kubenswrapper[27819]: I0319 09:48:21.776419 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.776821 master-0 kubenswrapper[27819]: I0319 09:48:21.776494 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.776821 master-0 kubenswrapper[27819]: I0319 09:48:21.776515 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-config\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.776821 master-0 kubenswrapper[27819]: I0319 09:48:21.776577 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-scripts\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.777398 master-0 kubenswrapper[27819]: I0319 09:48:21.777352 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-scripts\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.777715 master-0 kubenswrapper[27819]: I0319 09:48:21.777681 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.779753 master-0 kubenswrapper[27819]: I0319 09:48:21.779700 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-config\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.782093 master-0 kubenswrapper[27819]: I0319 09:48:21.782038 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.785248 master-0 kubenswrapper[27819]: I0319 09:48:21.785211 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.795228 master-0 kubenswrapper[27819]: I0319 09:48:21.795178 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.796264 master-0 kubenswrapper[27819]: I0319 09:48:21.796203 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l7g2\" (UniqueName: \"kubernetes.io/projected/3919ee28-a63c-4746-b7c1-66a3fb41e8c5-kube-api-access-2l7g2\") pod \"ovn-northd-0\" (UID: \"3919ee28-a63c-4746-b7c1-66a3fb41e8c5\") " pod="openstack/ovn-northd-0" Mar 19 09:48:21.962815 master-0 kubenswrapper[27819]: I0319 09:48:21.962676 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 19 09:48:22.037371 master-0 kubenswrapper[27819]: I0319 09:48:22.037308 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" Mar 19 09:48:22.038000 master-0 kubenswrapper[27819]: I0319 09:48:22.037360 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65db7fd8ff-zlg8r" event={"ID":"c532c07c-5664-4a96-ae4b-b754322407e8","Type":"ContainerDied","Data":"6e459043ecf8f4128573217c7960d79866d802d96ff7bab4f45938b7fc9b9a20"} Mar 19 09:48:22.038000 master-0 kubenswrapper[27819]: I0319 09:48:22.037433 27819 scope.go:117] "RemoveContainer" containerID="306bf968acc075e36c68133c4f2a555af7e8f8d872dec4cd097f06c2c14a2b03" Mar 19 09:48:22.049686 master-0 kubenswrapper[27819]: I0319 09:48:22.049612 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-47hh4" event={"ID":"91f2fdc0-7532-40cd-863d-df34b7b3dd0a","Type":"ContainerStarted","Data":"7198ff6ed2bce0ba59d6628c7f45d9f370816db93daa0790dd146237a2e0d5a4"} Mar 19 09:48:22.097967 master-0 kubenswrapper[27819]: I0319 09:48:22.097878 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76f498f559-47hh4" podStartSLOduration=3.097858165 podStartE2EDuration="3.097858165s" podCreationTimestamp="2026-03-19 09:48:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:48:22.074358844 +0000 UTC m=+886.995936536" watchObservedRunningTime="2026-03-19 09:48:22.097858165 +0000 UTC m=+887.019435857" Mar 19 09:48:22.162209 master-0 kubenswrapper[27819]: I0319 09:48:22.162133 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-zlg8r"] Mar 19 09:48:22.176629 master-0 kubenswrapper[27819]: I0319 09:48:22.175845 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65db7fd8ff-zlg8r"] Mar 19 09:48:22.450967 master-0 kubenswrapper[27819]: I0319 09:48:22.450907 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-northd-0"] Mar 19 09:48:22.453298 master-0 kubenswrapper[27819]: W0319 09:48:22.453038 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3919ee28_a63c_4746_b7c1_66a3fb41e8c5.slice/crio-0dc51104f36a8f391bb726583bdda2135e8092bf3621a46b70f6202c6b47bf6a WatchSource:0}: Error finding container 0dc51104f36a8f391bb726583bdda2135e8092bf3621a46b70f6202c6b47bf6a: Status 404 returned error can't find the container with id 0dc51104f36a8f391bb726583bdda2135e8092bf3621a46b70f6202c6b47bf6a Mar 19 09:48:23.071197 master-0 kubenswrapper[27819]: I0319 09:48:23.071138 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3919ee28-a63c-4746-b7c1-66a3fb41e8c5","Type":"ContainerStarted","Data":"0dc51104f36a8f391bb726583bdda2135e8092bf3621a46b70f6202c6b47bf6a"} Mar 19 09:48:23.072085 master-0 kubenswrapper[27819]: I0319 09:48:23.071410 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:23.297785 master-0 kubenswrapper[27819]: I0319 09:48:23.297713 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c532c07c-5664-4a96-ae4b-b754322407e8" path="/var/lib/kubelet/pods/c532c07c-5664-4a96-ae4b-b754322407e8/volumes" Mar 19 09:48:23.370077 master-0 kubenswrapper[27819]: I0319 09:48:23.369968 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 19 09:48:23.467748 master-0 kubenswrapper[27819]: I0319 09:48:23.467692 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 19 09:48:24.083267 master-0 kubenswrapper[27819]: I0319 09:48:24.083204 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"3919ee28-a63c-4746-b7c1-66a3fb41e8c5","Type":"ContainerStarted","Data":"af1e62b8fd0af0015c24ac0136178c4600518ea4e88d92062f3c2f7d2270a7fc"} Mar 19 09:48:25.093410 master-0 kubenswrapper[27819]: I0319 09:48:25.093355 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3919ee28-a63c-4746-b7c1-66a3fb41e8c5","Type":"ContainerStarted","Data":"d4779a6e56b16fd6e3c6482662df31b213fb7f8d9f2297d5c93009c71a892bf5"} Mar 19 09:48:25.094294 master-0 kubenswrapper[27819]: I0319 09:48:25.094262 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 19 09:48:25.227151 master-0 kubenswrapper[27819]: I0319 09:48:25.227051 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.9333631000000002 podStartE2EDuration="4.227028956s" podCreationTimestamp="2026-03-19 09:48:21 +0000 UTC" firstStartedPulling="2026-03-19 09:48:22.455477805 +0000 UTC m=+887.377055497" lastFinishedPulling="2026-03-19 09:48:23.749143661 +0000 UTC m=+888.670721353" observedRunningTime="2026-03-19 09:48:25.226921433 +0000 UTC m=+890.148499145" watchObservedRunningTime="2026-03-19 09:48:25.227028956 +0000 UTC m=+890.148606648" Mar 19 09:48:25.327346 master-0 kubenswrapper[27819]: I0319 09:48:25.327295 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-swwmg"] Mar 19 09:48:25.328875 master-0 kubenswrapper[27819]: I0319 09:48:25.328851 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.341290 master-0 kubenswrapper[27819]: I0319 09:48:25.341255 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 19 09:48:25.341728 master-0 kubenswrapper[27819]: I0319 09:48:25.341679 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-swwmg"] Mar 19 09:48:25.464918 master-0 kubenswrapper[27819]: I0319 09:48:25.464868 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz59q\" (UniqueName: \"kubernetes.io/projected/b6298d46-2954-436b-aed1-cd9c0f7e91e4-kube-api-access-dz59q\") pod \"root-account-create-update-swwmg\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.465165 master-0 kubenswrapper[27819]: I0319 09:48:25.465003 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6298d46-2954-436b-aed1-cd9c0f7e91e4-operator-scripts\") pod \"root-account-create-update-swwmg\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.567529 master-0 kubenswrapper[27819]: I0319 09:48:25.567476 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz59q\" (UniqueName: \"kubernetes.io/projected/b6298d46-2954-436b-aed1-cd9c0f7e91e4-kube-api-access-dz59q\") pod \"root-account-create-update-swwmg\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.567881 master-0 kubenswrapper[27819]: I0319 09:48:25.567862 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b6298d46-2954-436b-aed1-cd9c0f7e91e4-operator-scripts\") pod \"root-account-create-update-swwmg\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.568562 master-0 kubenswrapper[27819]: I0319 09:48:25.568499 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6298d46-2954-436b-aed1-cd9c0f7e91e4-operator-scripts\") pod \"root-account-create-update-swwmg\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.588012 master-0 kubenswrapper[27819]: I0319 09:48:25.587958 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz59q\" (UniqueName: \"kubernetes.io/projected/b6298d46-2954-436b-aed1-cd9c0f7e91e4-kube-api-access-dz59q\") pod \"root-account-create-update-swwmg\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.625946 master-0 kubenswrapper[27819]: I0319 09:48:25.625884 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 19 09:48:25.647797 master-0 kubenswrapper[27819]: I0319 09:48:25.647741 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:25.704331 master-0 kubenswrapper[27819]: I0319 09:48:25.704287 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 19 09:48:26.146467 master-0 kubenswrapper[27819]: I0319 09:48:26.146400 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-swwmg"] Mar 19 09:48:26.340401 master-0 kubenswrapper[27819]: I0319 09:48:26.340347 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-q4ngh"] Mar 19 09:48:26.342700 master-0 kubenswrapper[27819]: I0319 09:48:26.342671 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:26.380849 master-0 kubenswrapper[27819]: I0319 09:48:26.380525 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q4ngh"] Mar 19 09:48:26.503389 master-0 kubenswrapper[27819]: I0319 09:48:26.503251 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60be1ce-b91b-497c-935b-2f8d245d6f8f-operator-scripts\") pod \"keystone-db-create-q4ngh\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:26.503786 master-0 kubenswrapper[27819]: I0319 09:48:26.503758 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4d7v\" (UniqueName: \"kubernetes.io/projected/a60be1ce-b91b-497c-935b-2f8d245d6f8f-kube-api-access-n4d7v\") pod \"keystone-db-create-q4ngh\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:26.532038 master-0 kubenswrapper[27819]: I0319 09:48:26.531975 27819 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-6a2c-account-create-update-bhtkr"] Mar 19 09:48:26.533907 master-0 kubenswrapper[27819]: I0319 09:48:26.533876 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:26.539404 master-0 kubenswrapper[27819]: I0319 09:48:26.536490 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 19 09:48:26.547337 master-0 kubenswrapper[27819]: I0319 09:48:26.547273 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6a2c-account-create-update-bhtkr"] Mar 19 09:48:26.607724 master-0 kubenswrapper[27819]: I0319 09:48:26.607655 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4d7v\" (UniqueName: \"kubernetes.io/projected/a60be1ce-b91b-497c-935b-2f8d245d6f8f-kube-api-access-n4d7v\") pod \"keystone-db-create-q4ngh\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:26.607924 master-0 kubenswrapper[27819]: I0319 09:48:26.607777 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60be1ce-b91b-497c-935b-2f8d245d6f8f-operator-scripts\") pod \"keystone-db-create-q4ngh\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:26.608816 master-0 kubenswrapper[27819]: I0319 09:48:26.608773 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60be1ce-b91b-497c-935b-2f8d245d6f8f-operator-scripts\") pod \"keystone-db-create-q4ngh\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:26.708941 master-0 kubenswrapper[27819]: I0319 09:48:26.708888 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hdmdg\" (UniqueName: \"kubernetes.io/projected/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-kube-api-access-hdmdg\") pod \"keystone-6a2c-account-create-update-bhtkr\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:26.709081 master-0 kubenswrapper[27819]: I0319 09:48:26.708974 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-operator-scripts\") pod \"keystone-6a2c-account-create-update-bhtkr\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:26.811236 master-0 kubenswrapper[27819]: I0319 09:48:26.810969 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdmdg\" (UniqueName: \"kubernetes.io/projected/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-kube-api-access-hdmdg\") pod \"keystone-6a2c-account-create-update-bhtkr\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:26.811236 master-0 kubenswrapper[27819]: I0319 09:48:26.811075 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-operator-scripts\") pod \"keystone-6a2c-account-create-update-bhtkr\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:26.812438 master-0 kubenswrapper[27819]: I0319 09:48:26.812390 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-operator-scripts\") pod \"keystone-6a2c-account-create-update-bhtkr\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " 
pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:26.821556 master-0 kubenswrapper[27819]: I0319 09:48:26.821473 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4d7v\" (UniqueName: \"kubernetes.io/projected/a60be1ce-b91b-497c-935b-2f8d245d6f8f-kube-api-access-n4d7v\") pod \"keystone-db-create-q4ngh\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:26.892780 master-0 kubenswrapper[27819]: E0319 09:48:26.892709 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6298d46_2954_436b_aed1_cd9c0f7e91e4.slice/crio-e89c043f715c92ab3c228a470bb485aa633f42531b1b8de9e6d8ba3fe662cf3d.scope\": RecentStats: unable to find data in memory cache]" Mar 19 09:48:26.968404 master-0 kubenswrapper[27819]: I0319 09:48:26.968337 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:27.140463 master-0 kubenswrapper[27819]: I0319 09:48:27.140397 27819 generic.go:334] "Generic (PLEG): container finished" podID="b6298d46-2954-436b-aed1-cd9c0f7e91e4" containerID="e89c043f715c92ab3c228a470bb485aa633f42531b1b8de9e6d8ba3fe662cf3d" exitCode=0 Mar 19 09:48:27.140463 master-0 kubenswrapper[27819]: I0319 09:48:27.140464 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-swwmg" event={"ID":"b6298d46-2954-436b-aed1-cd9c0f7e91e4","Type":"ContainerDied","Data":"e89c043f715c92ab3c228a470bb485aa633f42531b1b8de9e6d8ba3fe662cf3d"} Mar 19 09:48:27.140714 master-0 kubenswrapper[27819]: I0319 09:48:27.140497 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-swwmg" event={"ID":"b6298d46-2954-436b-aed1-cd9c0f7e91e4","Type":"ContainerStarted","Data":"fa273b34d094b3392831fff85622229618b8aafe92e0c908840c7c5ef0d39fc2"} Mar 19 09:48:27.311216 master-0 kubenswrapper[27819]: I0319 09:48:27.311107 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xnr86"] Mar 19 09:48:27.312816 master-0 kubenswrapper[27819]: I0319 09:48:27.312735 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.402333 master-0 kubenswrapper[27819]: I0319 09:48:27.401528 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xnr86"] Mar 19 09:48:27.430932 master-0 kubenswrapper[27819]: I0319 09:48:27.430845 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmd29\" (UniqueName: \"kubernetes.io/projected/39e53946-899c-430a-8758-b8f7a30e3897-kube-api-access-fmd29\") pod \"placement-db-create-xnr86\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.431241 master-0 kubenswrapper[27819]: I0319 09:48:27.431202 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e53946-899c-430a-8758-b8f7a30e3897-operator-scripts\") pod \"placement-db-create-xnr86\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.432403 master-0 kubenswrapper[27819]: I0319 09:48:27.432367 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdmdg\" (UniqueName: \"kubernetes.io/projected/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-kube-api-access-hdmdg\") pod \"keystone-6a2c-account-create-update-bhtkr\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:27.476862 master-0 kubenswrapper[27819]: I0319 09:48:27.475698 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:27.477870 master-0 kubenswrapper[27819]: I0319 09:48:27.477412 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-q4ngh"] Mar 19 09:48:27.512555 master-0 kubenswrapper[27819]: I0319 09:48:27.512436 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-080c-account-create-update-9fljq"] Mar 19 09:48:27.516396 master-0 kubenswrapper[27819]: I0319 09:48:27.515105 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.529913 master-0 kubenswrapper[27819]: I0319 09:48:27.522150 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 19 09:48:27.530759 master-0 kubenswrapper[27819]: I0319 09:48:27.530688 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-080c-account-create-update-9fljq"] Mar 19 09:48:27.536279 master-0 kubenswrapper[27819]: I0319 09:48:27.536237 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e53946-899c-430a-8758-b8f7a30e3897-operator-scripts\") pod \"placement-db-create-xnr86\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.536846 master-0 kubenswrapper[27819]: I0319 09:48:27.536822 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmd29\" (UniqueName: \"kubernetes.io/projected/39e53946-899c-430a-8758-b8f7a30e3897-kube-api-access-fmd29\") pod \"placement-db-create-xnr86\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.537349 master-0 kubenswrapper[27819]: I0319 09:48:27.537289 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e53946-899c-430a-8758-b8f7a30e3897-operator-scripts\") pod \"placement-db-create-xnr86\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.563706 master-0 kubenswrapper[27819]: I0319 09:48:27.563424 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmd29\" (UniqueName: \"kubernetes.io/projected/39e53946-899c-430a-8758-b8f7a30e3897-kube-api-access-fmd29\") pod \"placement-db-create-xnr86\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.636596 master-0 kubenswrapper[27819]: I0319 09:48:27.631818 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xnr86" Mar 19 09:48:27.641105 master-0 kubenswrapper[27819]: I0319 09:48:27.638942 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh5g7\" (UniqueName: \"kubernetes.io/projected/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-kube-api-access-vh5g7\") pod \"placement-080c-account-create-update-9fljq\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.641105 master-0 kubenswrapper[27819]: I0319 09:48:27.639118 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-operator-scripts\") pod \"placement-080c-account-create-update-9fljq\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.772589 master-0 kubenswrapper[27819]: I0319 09:48:27.755478 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-operator-scripts\") pod \"placement-080c-account-create-update-9fljq\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.772589 master-0 kubenswrapper[27819]: I0319 09:48:27.755664 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh5g7\" (UniqueName: \"kubernetes.io/projected/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-kube-api-access-vh5g7\") pod \"placement-080c-account-create-update-9fljq\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.772589 master-0 kubenswrapper[27819]: I0319 09:48:27.757139 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-operator-scripts\") pod \"placement-080c-account-create-update-9fljq\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.802017 master-0 kubenswrapper[27819]: I0319 09:48:27.801158 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh5g7\" (UniqueName: \"kubernetes.io/projected/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-kube-api-access-vh5g7\") pod \"placement-080c-account-create-update-9fljq\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.857749 master-0 kubenswrapper[27819]: I0319 09:48:27.856921 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-47hh4"] Mar 19 09:48:27.857749 master-0 kubenswrapper[27819]: I0319 09:48:27.857194 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76f498f559-47hh4" podUID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" 
containerName="dnsmasq-dns" containerID="cri-o://7198ff6ed2bce0ba59d6628c7f45d9f370816db93daa0790dd146237a2e0d5a4" gracePeriod=10 Mar 19 09:48:27.900944 master-0 kubenswrapper[27819]: I0319 09:48:27.900709 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:27.902859 master-0 kubenswrapper[27819]: I0319 09:48:27.902334 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:27.920308 master-0 kubenswrapper[27819]: I0319 09:48:27.916750 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-ht7bl"] Mar 19 09:48:27.924737 master-0 kubenswrapper[27819]: I0319 09:48:27.924671 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.004236 master-0 kubenswrapper[27819]: I0319 09:48:28.004171 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.004353 master-0 kubenswrapper[27819]: I0319 09:48:28.004281 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-config\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.004434 master-0 kubenswrapper[27819]: I0319 09:48:28.004406 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.005299 master-0 kubenswrapper[27819]: I0319 09:48:28.004475 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.005299 master-0 kubenswrapper[27819]: I0319 09:48:28.005108 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4vl8\" (UniqueName: \"kubernetes.io/projected/e8f2abdd-185a-42c6-9cb8-1a905b907791-kube-api-access-p4vl8\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.033639 master-0 kubenswrapper[27819]: I0319 09:48:28.026661 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-ht7bl"] Mar 19 09:48:28.106687 master-0 kubenswrapper[27819]: I0319 09:48:28.106149 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.106687 master-0 kubenswrapper[27819]: I0319 09:48:28.106210 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " 
pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.106687 master-0 kubenswrapper[27819]: I0319 09:48:28.106246 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4vl8\" (UniqueName: \"kubernetes.io/projected/e8f2abdd-185a-42c6-9cb8-1a905b907791-kube-api-access-p4vl8\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.106687 master-0 kubenswrapper[27819]: I0319 09:48:28.106324 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.106687 master-0 kubenswrapper[27819]: I0319 09:48:28.106354 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-config\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.110271 master-0 kubenswrapper[27819]: I0319 09:48:28.107182 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-config\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.110271 master-0 kubenswrapper[27819]: I0319 09:48:28.109451 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-dns-svc\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " 
pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.110271 master-0 kubenswrapper[27819]: I0319 09:48:28.110156 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.124793 master-0 kubenswrapper[27819]: I0319 09:48:28.111801 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.147412 master-0 kubenswrapper[27819]: I0319 09:48:28.146317 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4vl8\" (UniqueName: \"kubernetes.io/projected/e8f2abdd-185a-42c6-9cb8-1a905b907791-kube-api-access-p4vl8\") pod \"dnsmasq-dns-5bf8b865dc-ht7bl\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.186206 master-0 kubenswrapper[27819]: I0319 09:48:28.182269 27819 generic.go:334] "Generic (PLEG): container finished" podID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" containerID="7198ff6ed2bce0ba59d6628c7f45d9f370816db93daa0790dd146237a2e0d5a4" exitCode=0 Mar 19 09:48:28.186206 master-0 kubenswrapper[27819]: I0319 09:48:28.182341 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-47hh4" event={"ID":"91f2fdc0-7532-40cd-863d-df34b7b3dd0a","Type":"ContainerDied","Data":"7198ff6ed2bce0ba59d6628c7f45d9f370816db93daa0790dd146237a2e0d5a4"} Mar 19 09:48:28.204950 master-0 kubenswrapper[27819]: I0319 09:48:28.203050 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-create-q4ngh" event={"ID":"a60be1ce-b91b-497c-935b-2f8d245d6f8f","Type":"ContainerStarted","Data":"295597e41eb1a3bf943e421ab4b58d3e0a34d849839eede8c6fd6242988132fb"} Mar 19 09:48:28.204950 master-0 kubenswrapper[27819]: I0319 09:48:28.203113 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q4ngh" event={"ID":"a60be1ce-b91b-497c-935b-2f8d245d6f8f","Type":"ContainerStarted","Data":"6a903df27069c3ffcdffe5c209813e61154a2a77e4c91beccbea48d083d5ebc8"} Mar 19 09:48:28.330753 master-0 kubenswrapper[27819]: I0319 09:48:28.325232 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:28.368771 master-0 kubenswrapper[27819]: I0319 09:48:28.357908 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-q4ngh" podStartSLOduration=2.357885462 podStartE2EDuration="2.357885462s" podCreationTimestamp="2026-03-19 09:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:48:28.35377151 +0000 UTC m=+893.275349292" watchObservedRunningTime="2026-03-19 09:48:28.357885462 +0000 UTC m=+893.279463154" Mar 19 09:48:28.403361 master-0 kubenswrapper[27819]: I0319 09:48:28.403308 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6a2c-account-create-update-bhtkr"] Mar 19 09:48:28.820515 master-0 kubenswrapper[27819]: I0319 09:48:28.820432 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xnr86"] Mar 19 09:48:29.051818 master-0 kubenswrapper[27819]: I0319 09:48:29.051699 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-080c-account-create-update-9fljq"] Mar 19 09:48:29.079085 master-0 kubenswrapper[27819]: I0319 09:48:29.078965 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:29.081032 master-0 kubenswrapper[27819]: W0319 09:48:29.080962 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce72d5ce_6259_47ba_a860_7b45dafbbf7a.slice/crio-6a9fb0513a5bc633959e6201c7c389d662773e5a723302e2552ce3055c9ab6ed WatchSource:0}: Error finding container 6a9fb0513a5bc633959e6201c7c389d662773e5a723302e2552ce3055c9ab6ed: Status 404 returned error can't find the container with id 6a9fb0513a5bc633959e6201c7c389d662773e5a723302e2552ce3055c9ab6ed Mar 19 09:48:29.107206 master-0 kubenswrapper[27819]: I0319 09:48:29.101669 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-dns-svc\") pod \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " Mar 19 09:48:29.107206 master-0 kubenswrapper[27819]: I0319 09:48:29.101754 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-sb\") pod \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " Mar 19 09:48:29.107206 master-0 kubenswrapper[27819]: I0319 09:48:29.101797 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-nb\") pod \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " Mar 19 09:48:29.107206 master-0 kubenswrapper[27819]: I0319 09:48:29.101857 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ln4\" (UniqueName: 
\"kubernetes.io/projected/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-kube-api-access-r2ln4\") pod \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " Mar 19 09:48:29.107206 master-0 kubenswrapper[27819]: I0319 09:48:29.101884 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-config\") pod \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\" (UID: \"91f2fdc0-7532-40cd-863d-df34b7b3dd0a\") " Mar 19 09:48:29.156105 master-0 kubenswrapper[27819]: I0319 09:48:29.155964 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-kube-api-access-r2ln4" (OuterVolumeSpecName: "kube-api-access-r2ln4") pod "91f2fdc0-7532-40cd-863d-df34b7b3dd0a" (UID: "91f2fdc0-7532-40cd-863d-df34b7b3dd0a"). InnerVolumeSpecName "kube-api-access-r2ln4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:29.209310 master-0 kubenswrapper[27819]: I0319 09:48:29.209247 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ln4\" (UniqueName: \"kubernetes.io/projected/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-kube-api-access-r2ln4\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:29.232717 master-0 kubenswrapper[27819]: I0319 09:48:29.232670 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-ht7bl"] Mar 19 09:48:29.234476 master-0 kubenswrapper[27819]: I0319 09:48:29.234426 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-config" (OuterVolumeSpecName: "config") pod "91f2fdc0-7532-40cd-863d-df34b7b3dd0a" (UID: "91f2fdc0-7532-40cd-863d-df34b7b3dd0a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:29.248657 master-0 kubenswrapper[27819]: I0319 09:48:29.245823 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xnr86" event={"ID":"39e53946-899c-430a-8758-b8f7a30e3897","Type":"ContainerStarted","Data":"7a16d8e44045d02299ec0518ea290b8605d9c2066bf78d9f1ece95319aed7c01"} Mar 19 09:48:29.248657 master-0 kubenswrapper[27819]: I0319 09:48:29.245895 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xnr86" event={"ID":"39e53946-899c-430a-8758-b8f7a30e3897","Type":"ContainerStarted","Data":"e4f08c2085adeecd97daec059834b4714eee5db060d2065f368d4bddea9b98aa"} Mar 19 09:48:29.251718 master-0 kubenswrapper[27819]: I0319 09:48:29.251115 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91f2fdc0-7532-40cd-863d-df34b7b3dd0a" (UID: "91f2fdc0-7532-40cd-863d-df34b7b3dd0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:29.261192 master-0 kubenswrapper[27819]: I0319 09:48:29.258513 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91f2fdc0-7532-40cd-863d-df34b7b3dd0a" (UID: "91f2fdc0-7532-40cd-863d-df34b7b3dd0a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:29.270413 master-0 kubenswrapper[27819]: I0319 09:48:29.270358 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a2c-account-create-update-bhtkr" event={"ID":"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a","Type":"ContainerStarted","Data":"16d7e745ecf80d32283d79a3110151f972abf05e38483efb25335c84976520f2"} Mar 19 09:48:29.270630 master-0 kubenswrapper[27819]: I0319 09:48:29.270469 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a2c-account-create-update-bhtkr" event={"ID":"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a","Type":"ContainerStarted","Data":"014d4ded5662e57dc479392a2692e8dc60af6c84bc69d3c3ebb3aaf10ff593e4"} Mar 19 09:48:29.275556 master-0 kubenswrapper[27819]: I0319 09:48:29.275486 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-xnr86" podStartSLOduration=3.275469654 podStartE2EDuration="3.275469654s" podCreationTimestamp="2026-03-19 09:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:48:29.268445152 +0000 UTC m=+894.190022844" watchObservedRunningTime="2026-03-19 09:48:29.275469654 +0000 UTC m=+894.197047346" Mar 19 09:48:29.307990 master-0 kubenswrapper[27819]: I0319 09:48:29.307935 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91f2fdc0-7532-40cd-863d-df34b7b3dd0a" (UID: "91f2fdc0-7532-40cd-863d-df34b7b3dd0a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:29.308389 master-0 kubenswrapper[27819]: I0319 09:48:29.308366 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f498f559-47hh4" Mar 19 09:48:29.312127 master-0 kubenswrapper[27819]: I0319 09:48:29.311993 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:29.312324 master-0 kubenswrapper[27819]: I0319 09:48:29.312307 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:29.312403 master-0 kubenswrapper[27819]: I0319 09:48:29.312392 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:29.312466 master-0 kubenswrapper[27819]: I0319 09:48:29.312456 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91f2fdc0-7532-40cd-863d-df34b7b3dd0a-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:29.313107 master-0 kubenswrapper[27819]: I0319 09:48:29.313066 27819 generic.go:334] "Generic (PLEG): container finished" podID="a60be1ce-b91b-497c-935b-2f8d245d6f8f" containerID="295597e41eb1a3bf943e421ab4b58d3e0a34d849839eede8c6fd6242988132fb" exitCode=0 Mar 19 09:48:29.315356 master-0 kubenswrapper[27819]: I0319 09:48:29.314991 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6a2c-account-create-update-bhtkr" podStartSLOduration=3.3149744820000002 podStartE2EDuration="3.314974482s" podCreationTimestamp="2026-03-19 09:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:48:29.296513388 +0000 UTC m=+894.218091080" 
watchObservedRunningTime="2026-03-19 09:48:29.314974482 +0000 UTC m=+894.236552184" Mar 19 09:48:29.538449 master-0 kubenswrapper[27819]: I0319 09:48:29.538382 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-080c-account-create-update-9fljq" event={"ID":"ce72d5ce-6259-47ba-a860-7b45dafbbf7a","Type":"ContainerStarted","Data":"6a9fb0513a5bc633959e6201c7c389d662773e5a723302e2552ce3055c9ab6ed"} Mar 19 09:48:29.538449 master-0 kubenswrapper[27819]: I0319 09:48:29.538464 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f498f559-47hh4" event={"ID":"91f2fdc0-7532-40cd-863d-df34b7b3dd0a","Type":"ContainerDied","Data":"49a63e8978182074740808d711f915ce9de049954eb778bc598917efb7da7f9e"} Mar 19 09:48:29.539107 master-0 kubenswrapper[27819]: I0319 09:48:29.538488 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q4ngh" event={"ID":"a60be1ce-b91b-497c-935b-2f8d245d6f8f","Type":"ContainerDied","Data":"295597e41eb1a3bf943e421ab4b58d3e0a34d849839eede8c6fd6242988132fb"} Mar 19 09:48:29.539107 master-0 kubenswrapper[27819]: I0319 09:48:29.538507 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-swwmg" event={"ID":"b6298d46-2954-436b-aed1-cd9c0f7e91e4","Type":"ContainerDied","Data":"fa273b34d094b3392831fff85622229618b8aafe92e0c908840c7c5ef0d39fc2"} Mar 19 09:48:29.539107 master-0 kubenswrapper[27819]: I0319 09:48:29.538523 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa273b34d094b3392831fff85622229618b8aafe92e0c908840c7c5ef0d39fc2" Mar 19 09:48:29.539107 master-0 kubenswrapper[27819]: I0319 09:48:29.538562 27819 scope.go:117] "RemoveContainer" containerID="7198ff6ed2bce0ba59d6628c7f45d9f370816db93daa0790dd146237a2e0d5a4" Mar 19 09:48:29.614069 master-0 kubenswrapper[27819]: I0319 09:48:29.614013 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:29.630646 master-0 kubenswrapper[27819]: I0319 09:48:29.629000 27819 scope.go:117] "RemoveContainer" containerID="f1946869e84c33ac56a3d769f916c095e4dbd52774b6b0a26a3472c408e90e7e" Mar 19 09:48:29.676261 master-0 kubenswrapper[27819]: I0319 09:48:29.676102 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-47hh4"] Mar 19 09:48:29.685270 master-0 kubenswrapper[27819]: I0319 09:48:29.685204 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76f498f559-47hh4"] Mar 19 09:48:29.724561 master-0 kubenswrapper[27819]: I0319 09:48:29.723616 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz59q\" (UniqueName: \"kubernetes.io/projected/b6298d46-2954-436b-aed1-cd9c0f7e91e4-kube-api-access-dz59q\") pod \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " Mar 19 09:48:29.724561 master-0 kubenswrapper[27819]: I0319 09:48:29.723767 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6298d46-2954-436b-aed1-cd9c0f7e91e4-operator-scripts\") pod \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\" (UID: \"b6298d46-2954-436b-aed1-cd9c0f7e91e4\") " Mar 19 09:48:29.730774 master-0 kubenswrapper[27819]: I0319 09:48:29.726825 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6298d46-2954-436b-aed1-cd9c0f7e91e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6298d46-2954-436b-aed1-cd9c0f7e91e4" (UID: "b6298d46-2954-436b-aed1-cd9c0f7e91e4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:29.734290 master-0 kubenswrapper[27819]: I0319 09:48:29.734236 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6298d46-2954-436b-aed1-cd9c0f7e91e4-kube-api-access-dz59q" (OuterVolumeSpecName: "kube-api-access-dz59q") pod "b6298d46-2954-436b-aed1-cd9c0f7e91e4" (UID: "b6298d46-2954-436b-aed1-cd9c0f7e91e4"). InnerVolumeSpecName "kube-api-access-dz59q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:29.829423 master-0 kubenswrapper[27819]: I0319 09:48:29.829361 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz59q\" (UniqueName: \"kubernetes.io/projected/b6298d46-2954-436b-aed1-cd9c0f7e91e4-kube-api-access-dz59q\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:29.829423 master-0 kubenswrapper[27819]: I0319 09:48:29.829408 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6298d46-2954-436b-aed1-cd9c0f7e91e4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:29.856462 master-0 kubenswrapper[27819]: I0319 09:48:29.856318 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 19 09:48:29.856805 master-0 kubenswrapper[27819]: E0319 09:48:29.856764 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6298d46-2954-436b-aed1-cd9c0f7e91e4" containerName="mariadb-account-create-update" Mar 19 09:48:29.856805 master-0 kubenswrapper[27819]: I0319 09:48:29.856779 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6298d46-2954-436b-aed1-cd9c0f7e91e4" containerName="mariadb-account-create-update" Mar 19 09:48:29.856887 master-0 kubenswrapper[27819]: E0319 09:48:29.856820 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" containerName="dnsmasq-dns" Mar 19 09:48:29.856887 master-0 
kubenswrapper[27819]: I0319 09:48:29.856828 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" containerName="dnsmasq-dns" Mar 19 09:48:29.856887 master-0 kubenswrapper[27819]: E0319 09:48:29.856851 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" containerName="init" Mar 19 09:48:29.856887 master-0 kubenswrapper[27819]: I0319 09:48:29.856865 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" containerName="init" Mar 19 09:48:29.857128 master-0 kubenswrapper[27819]: I0319 09:48:29.857073 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" containerName="dnsmasq-dns" Mar 19 09:48:29.857128 master-0 kubenswrapper[27819]: I0319 09:48:29.857100 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6298d46-2954-436b-aed1-cd9c0f7e91e4" containerName="mariadb-account-create-update" Mar 19 09:48:29.868787 master-0 kubenswrapper[27819]: I0319 09:48:29.868091 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 19 09:48:29.872631 master-0 kubenswrapper[27819]: I0319 09:48:29.872589 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 19 09:48:29.872830 master-0 kubenswrapper[27819]: I0319 09:48:29.872673 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 19 09:48:29.872830 master-0 kubenswrapper[27819]: I0319 09:48:29.872617 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 19 09:48:29.884700 master-0 kubenswrapper[27819]: I0319 09:48:29.877079 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 19 09:48:30.032889 master-0 kubenswrapper[27819]: I0319 09:48:30.032754 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-70ed5fc2-e581-4c2f-8361-5a7572c28a72\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4212fcc3-2815-44a3-8fbe-f65860653551\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.032889 master-0 kubenswrapper[27819]: I0319 09:48:30.032806 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/53eef9d1-14df-45aa-ae9b-bc7583066d10-lock\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.032889 master-0 kubenswrapper[27819]: I0319 09:48:30.032838 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7wt\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-kube-api-access-4r7wt\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.032889 master-0 
kubenswrapper[27819]: I0319 09:48:30.032857 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/53eef9d1-14df-45aa-ae9b-bc7583066d10-cache\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.033225 master-0 kubenswrapper[27819]: I0319 09:48:30.032957 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.033225 master-0 kubenswrapper[27819]: I0319 09:48:30.033171 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53eef9d1-14df-45aa-ae9b-bc7583066d10-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.134787 master-0 kubenswrapper[27819]: I0319 09:48:30.134711 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53eef9d1-14df-45aa-ae9b-bc7583066d10-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.135029 master-0 kubenswrapper[27819]: I0319 09:48:30.134901 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-70ed5fc2-e581-4c2f-8361-5a7572c28a72\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4212fcc3-2815-44a3-8fbe-f65860653551\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.135029 master-0 kubenswrapper[27819]: I0319 09:48:30.134952 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/53eef9d1-14df-45aa-ae9b-bc7583066d10-lock\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.135202 master-0 kubenswrapper[27819]: I0319 09:48:30.135136 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r7wt\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-kube-api-access-4r7wt\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.135257 master-0 kubenswrapper[27819]: I0319 09:48:30.135244 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/53eef9d1-14df-45aa-ae9b-bc7583066d10-cache\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.135361 master-0 kubenswrapper[27819]: I0319 09:48:30.135333 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.135576 master-0 kubenswrapper[27819]: E0319 09:48:30.135524 27819 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 09:48:30.135576 master-0 kubenswrapper[27819]: E0319 09:48:30.135574 27819 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 09:48:30.135706 master-0 kubenswrapper[27819]: E0319 09:48:30.135650 27819 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift podName:53eef9d1-14df-45aa-ae9b-bc7583066d10 nodeName:}" failed. No retries permitted until 2026-03-19 09:48:30.635624029 +0000 UTC m=+895.557201731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift") pod "swift-storage-0" (UID: "53eef9d1-14df-45aa-ae9b-bc7583066d10") : configmap "swift-ring-files" not found Mar 19 09:48:30.135991 master-0 kubenswrapper[27819]: I0319 09:48:30.135967 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/53eef9d1-14df-45aa-ae9b-bc7583066d10-lock\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.136371 master-0 kubenswrapper[27819]: I0319 09:48:30.136346 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/53eef9d1-14df-45aa-ae9b-bc7583066d10-cache\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.137888 master-0 kubenswrapper[27819]: I0319 09:48:30.137854 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 09:48:30.137994 master-0 kubenswrapper[27819]: I0319 09:48:30.137906 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-70ed5fc2-e581-4c2f-8361-5a7572c28a72\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4212fcc3-2815-44a3-8fbe-f65860653551\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/ff00c2ece03a46fe7fe12944938623d16c61b82687bd74bf9bf7b7f44bff6545/globalmount\"" pod="openstack/swift-storage-0" Mar 19 09:48:30.138651 master-0 kubenswrapper[27819]: I0319 09:48:30.138621 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53eef9d1-14df-45aa-ae9b-bc7583066d10-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.172151 master-0 kubenswrapper[27819]: I0319 09:48:30.172021 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r7wt\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-kube-api-access-4r7wt\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.333924 master-0 kubenswrapper[27819]: I0319 09:48:30.331800 27819 generic.go:334] "Generic (PLEG): container finished" podID="48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a" containerID="16d7e745ecf80d32283d79a3110151f972abf05e38483efb25335c84976520f2" exitCode=0 Mar 19 09:48:30.333924 master-0 kubenswrapper[27819]: I0319 09:48:30.331876 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a2c-account-create-update-bhtkr" event={"ID":"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a","Type":"ContainerDied","Data":"16d7e745ecf80d32283d79a3110151f972abf05e38483efb25335c84976520f2"} Mar 19 09:48:30.336689 master-0 kubenswrapper[27819]: I0319 09:48:30.336628 27819 
generic.go:334] "Generic (PLEG): container finished" podID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerID="6452eb0b322bf114f26e7e95257237fb6a0f19b13becec0729ceb5f70035d18b" exitCode=0 Mar 19 09:48:30.336872 master-0 kubenswrapper[27819]: I0319 09:48:30.336725 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" event={"ID":"e8f2abdd-185a-42c6-9cb8-1a905b907791","Type":"ContainerDied","Data":"6452eb0b322bf114f26e7e95257237fb6a0f19b13becec0729ceb5f70035d18b"} Mar 19 09:48:30.336872 master-0 kubenswrapper[27819]: I0319 09:48:30.336758 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" event={"ID":"e8f2abdd-185a-42c6-9cb8-1a905b907791","Type":"ContainerStarted","Data":"f77689176406a5a868ecd60d1907ea702aaf58bc228846d2193d7ef133b71f44"} Mar 19 09:48:30.339364 master-0 kubenswrapper[27819]: I0319 09:48:30.339316 27819 generic.go:334] "Generic (PLEG): container finished" podID="ce72d5ce-6259-47ba-a860-7b45dafbbf7a" containerID="b15635387ccd8991450e5027a429ddcaf5450adcd82415a41b41c675d9acf4cb" exitCode=0 Mar 19 09:48:30.339482 master-0 kubenswrapper[27819]: I0319 09:48:30.339368 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-080c-account-create-update-9fljq" event={"ID":"ce72d5ce-6259-47ba-a860-7b45dafbbf7a","Type":"ContainerDied","Data":"b15635387ccd8991450e5027a429ddcaf5450adcd82415a41b41c675d9acf4cb"} Mar 19 09:48:30.345435 master-0 kubenswrapper[27819]: I0319 09:48:30.345390 27819 generic.go:334] "Generic (PLEG): container finished" podID="39e53946-899c-430a-8758-b8f7a30e3897" containerID="7a16d8e44045d02299ec0518ea290b8605d9c2066bf78d9f1ece95319aed7c01" exitCode=0 Mar 19 09:48:30.345638 master-0 kubenswrapper[27819]: I0319 09:48:30.345501 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-swwmg" Mar 19 09:48:30.345794 master-0 kubenswrapper[27819]: I0319 09:48:30.345699 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xnr86" event={"ID":"39e53946-899c-430a-8758-b8f7a30e3897","Type":"ContainerDied","Data":"7a16d8e44045d02299ec0518ea290b8605d9c2066bf78d9f1ece95319aed7c01"} Mar 19 09:48:30.647618 master-0 kubenswrapper[27819]: I0319 09:48:30.647369 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:30.649335 master-0 kubenswrapper[27819]: E0319 09:48:30.648300 27819 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 09:48:30.649335 master-0 kubenswrapper[27819]: E0319 09:48:30.648326 27819 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 09:48:30.649743 master-0 kubenswrapper[27819]: E0319 09:48:30.648377 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift podName:53eef9d1-14df-45aa-ae9b-bc7583066d10 nodeName:}" failed. No retries permitted until 2026-03-19 09:48:31.648359853 +0000 UTC m=+896.569937545 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift") pod "swift-storage-0" (UID: "53eef9d1-14df-45aa-ae9b-bc7583066d10") : configmap "swift-ring-files" not found Mar 19 09:48:30.835081 master-0 kubenswrapper[27819]: I0319 09:48:30.833999 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-h4t4p"] Mar 19 09:48:30.835921 master-0 kubenswrapper[27819]: I0319 09:48:30.835881 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:30.843403 master-0 kubenswrapper[27819]: I0319 09:48:30.842742 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 19 09:48:30.843403 master-0 kubenswrapper[27819]: I0319 09:48:30.843027 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 19 09:48:30.843403 master-0 kubenswrapper[27819]: I0319 09:48:30.843183 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 19 09:48:30.887209 master-0 kubenswrapper[27819]: I0319 09:48:30.887157 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: I0319 09:48:30.888749 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-h4t4p"] Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: E0319 09:48:30.889604 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-2zqbl ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-h4t4p" podUID="de024eb5-6cea-4ec6-b369-185f7c082092" Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: I0319 09:48:30.901601 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kmw8s"] Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: E0319 09:48:30.902124 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60be1ce-b91b-497c-935b-2f8d245d6f8f" containerName="mariadb-database-create" Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: I0319 09:48:30.902139 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60be1ce-b91b-497c-935b-2f8d245d6f8f" containerName="mariadb-database-create" Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: I0319 09:48:30.902370 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60be1ce-b91b-497c-935b-2f8d245d6f8f" containerName="mariadb-database-create" Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: I0319 09:48:30.903048 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: I0319 09:48:30.921985 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-h4t4p"] Mar 19 09:48:30.933862 master-0 kubenswrapper[27819]: I0319 09:48:30.922055 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kmw8s"] Mar 19 09:48:30.954916 master-0 kubenswrapper[27819]: I0319 09:48:30.954847 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-scripts\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:30.954916 master-0 kubenswrapper[27819]: I0319 09:48:30.954912 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de024eb5-6cea-4ec6-b369-185f7c082092-etc-swift\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:30.955193 master-0 kubenswrapper[27819]: I0319 09:48:30.955032 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-swiftconf\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:30.955193 master-0 kubenswrapper[27819]: I0319 09:48:30.955076 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-ring-data-devices\") pod \"swift-ring-rebalance-h4t4p\" (UID: 
\"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:30.955193 master-0 kubenswrapper[27819]: I0319 09:48:30.955153 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-dispersionconf\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:30.955315 master-0 kubenswrapper[27819]: I0319 09:48:30.955210 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqbl\" (UniqueName: \"kubernetes.io/projected/de024eb5-6cea-4ec6-b369-185f7c082092-kube-api-access-2zqbl\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:30.955315 master-0 kubenswrapper[27819]: I0319 09:48:30.955253 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-combined-ca-bundle\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.050762 master-0 kubenswrapper[27819]: I0319 09:48:31.049665 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-7mpd2"] Mar 19 09:48:31.056019 master-0 kubenswrapper[27819]: I0319 09:48:31.051287 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.056448 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60be1ce-b91b-497c-935b-2f8d245d6f8f-operator-scripts\") pod \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.056819 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4d7v\" (UniqueName: \"kubernetes.io/projected/a60be1ce-b91b-497c-935b-2f8d245d6f8f-kube-api-access-n4d7v\") pod \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\" (UID: \"a60be1ce-b91b-497c-935b-2f8d245d6f8f\") " Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057226 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-ring-data-devices\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057334 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-swiftconf\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057389 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-ring-data-devices\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " 
pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057420 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-swiftconf\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057447 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-scripts\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057510 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czq6x\" (UniqueName: \"kubernetes.io/projected/533235c2-501c-4388-beef-c19b3f33f733-kube-api-access-czq6x\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057632 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-dispersionconf\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057686 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-combined-ca-bundle\") pod 
\"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057721 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/533235c2-501c-4388-beef-c19b3f33f733-etc-swift\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057757 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqbl\" (UniqueName: \"kubernetes.io/projected/de024eb5-6cea-4ec6-b369-185f7c082092-kube-api-access-2zqbl\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057796 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-combined-ca-bundle\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057838 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-scripts\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057863 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de024eb5-6cea-4ec6-b369-185f7c082092-etc-swift\") pod 
\"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.057903 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-dispersionconf\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.058582 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a60be1ce-b91b-497c-935b-2f8d245d6f8f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a60be1ce-b91b-497c-935b-2f8d245d6f8f" (UID: "a60be1ce-b91b-497c-935b-2f8d245d6f8f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.062995 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60be1ce-b91b-497c-935b-2f8d245d6f8f-kube-api-access-n4d7v" (OuterVolumeSpecName: "kube-api-access-n4d7v") pod "a60be1ce-b91b-497c-935b-2f8d245d6f8f" (UID: "a60be1ce-b91b-497c-935b-2f8d245d6f8f"). InnerVolumeSpecName "kube-api-access-n4d7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:31.067177 master-0 kubenswrapper[27819]: I0319 09:48:31.066760 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-swiftconf\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.068032 master-0 kubenswrapper[27819]: I0319 09:48:31.067462 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-ring-data-devices\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.068898 master-0 kubenswrapper[27819]: I0319 09:48:31.068862 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-scripts\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.070611 master-0 kubenswrapper[27819]: I0319 09:48:31.070573 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-dispersionconf\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.070942 master-0 kubenswrapper[27819]: I0319 09:48:31.070914 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de024eb5-6cea-4ec6-b369-185f7c082092-etc-swift\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.083563 
master-0 kubenswrapper[27819]: I0319 09:48:31.080285 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-combined-ca-bundle\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.083563 master-0 kubenswrapper[27819]: I0319 09:48:31.083054 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7mpd2"] Mar 19 09:48:31.089985 master-0 kubenswrapper[27819]: I0319 09:48:31.086680 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqbl\" (UniqueName: \"kubernetes.io/projected/de024eb5-6cea-4ec6-b369-185f7c082092-kube-api-access-2zqbl\") pod \"swift-ring-rebalance-h4t4p\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.111323 master-0 kubenswrapper[27819]: I0319 09:48:31.111244 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-d9a8-account-create-update-cqgzr"] Mar 19 09:48:31.112661 master-0 kubenswrapper[27819]: I0319 09:48:31.112584 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.115690 master-0 kubenswrapper[27819]: I0319 09:48:31.115659 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 19 09:48:31.129390 master-0 kubenswrapper[27819]: I0319 09:48:31.128462 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d9a8-account-create-update-cqgzr"] Mar 19 09:48:31.168941 master-0 kubenswrapper[27819]: I0319 09:48:31.168874 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czq6x\" (UniqueName: \"kubernetes.io/projected/533235c2-501c-4388-beef-c19b3f33f733-kube-api-access-czq6x\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.169297 master-0 kubenswrapper[27819]: I0319 09:48:31.169279 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gj54\" (UniqueName: \"kubernetes.io/projected/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-kube-api-access-8gj54\") pod \"glance-db-create-7mpd2\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.169678 master-0 kubenswrapper[27819]: I0319 09:48:31.169661 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-combined-ca-bundle\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.169904 master-0 kubenswrapper[27819]: I0319 09:48:31.169869 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/533235c2-501c-4388-beef-c19b3f33f733-etc-swift\") pod \"swift-ring-rebalance-kmw8s\" (UID: 
\"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.170086 master-0 kubenswrapper[27819]: I0319 09:48:31.170070 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-dispersionconf\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.170241 master-0 kubenswrapper[27819]: I0319 09:48:31.170227 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-ring-data-devices\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.170383 master-0 kubenswrapper[27819]: I0319 09:48:31.170350 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-operator-scripts\") pod \"glance-db-create-7mpd2\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.170692 master-0 kubenswrapper[27819]: I0319 09:48:31.170677 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-swiftconf\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.170806 master-0 kubenswrapper[27819]: I0319 09:48:31.170793 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-scripts\") pod \"swift-ring-rebalance-kmw8s\" (UID: 
\"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.171013 master-0 kubenswrapper[27819]: I0319 09:48:31.171000 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4d7v\" (UniqueName: \"kubernetes.io/projected/a60be1ce-b91b-497c-935b-2f8d245d6f8f-kube-api-access-n4d7v\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.171108 master-0 kubenswrapper[27819]: I0319 09:48:31.171097 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a60be1ce-b91b-497c-935b-2f8d245d6f8f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.171199 master-0 kubenswrapper[27819]: I0319 09:48:31.170715 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/533235c2-501c-4388-beef-c19b3f33f733-etc-swift\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.172079 master-0 kubenswrapper[27819]: I0319 09:48:31.172042 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-scripts\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.172792 master-0 kubenswrapper[27819]: I0319 09:48:31.172764 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-ring-data-devices\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.173457 master-0 kubenswrapper[27819]: I0319 09:48:31.173428 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-combined-ca-bundle\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.174129 master-0 kubenswrapper[27819]: I0319 09:48:31.174097 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-swiftconf\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.174365 master-0 kubenswrapper[27819]: I0319 09:48:31.174339 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-dispersionconf\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.187102 master-0 kubenswrapper[27819]: I0319 09:48:31.187057 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czq6x\" (UniqueName: \"kubernetes.io/projected/533235c2-501c-4388-beef-c19b3f33f733-kube-api-access-czq6x\") pod \"swift-ring-rebalance-kmw8s\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.274033 master-0 kubenswrapper[27819]: I0319 09:48:31.273944 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-operator-scripts\") pod \"glance-db-create-7mpd2\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.274407 master-0 kubenswrapper[27819]: I0319 09:48:31.274378 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48418558-addd-4db9-a62d-177832acc8db-operator-scripts\") pod \"glance-d9a8-account-create-update-cqgzr\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.274621 master-0 kubenswrapper[27819]: I0319 09:48:31.274415 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m87s\" (UniqueName: \"kubernetes.io/projected/48418558-addd-4db9-a62d-177832acc8db-kube-api-access-6m87s\") pod \"glance-d9a8-account-create-update-cqgzr\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.274621 master-0 kubenswrapper[27819]: I0319 09:48:31.274471 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gj54\" (UniqueName: \"kubernetes.io/projected/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-kube-api-access-8gj54\") pod \"glance-db-create-7mpd2\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.274765 master-0 kubenswrapper[27819]: I0319 09:48:31.274745 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-operator-scripts\") pod \"glance-db-create-7mpd2\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.292068 master-0 kubenswrapper[27819]: I0319 09:48:31.291147 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gj54\" (UniqueName: \"kubernetes.io/projected/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-kube-api-access-8gj54\") pod \"glance-db-create-7mpd2\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.298711 master-0 kubenswrapper[27819]: I0319 09:48:31.297654 27819 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f2fdc0-7532-40cd-863d-df34b7b3dd0a" path="/var/lib/kubelet/pods/91f2fdc0-7532-40cd-863d-df34b7b3dd0a/volumes" Mar 19 09:48:31.313197 master-0 kubenswrapper[27819]: I0319 09:48:31.313142 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:31.361274 master-0 kubenswrapper[27819]: I0319 09:48:31.359924 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" event={"ID":"e8f2abdd-185a-42c6-9cb8-1a905b907791","Type":"ContainerStarted","Data":"3ee9584c1f4889f02c39223d39a759e17fb0d166c173d3751dc22c602c027422"} Mar 19 09:48:31.361274 master-0 kubenswrapper[27819]: I0319 09:48:31.360056 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:31.365640 master-0 kubenswrapper[27819]: I0319 09:48:31.362977 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-q4ngh" Mar 19 09:48:31.365640 master-0 kubenswrapper[27819]: I0319 09:48:31.363705 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-q4ngh" event={"ID":"a60be1ce-b91b-497c-935b-2f8d245d6f8f","Type":"ContainerDied","Data":"6a903df27069c3ffcdffe5c209813e61154a2a77e4c91beccbea48d083d5ebc8"} Mar 19 09:48:31.365640 master-0 kubenswrapper[27819]: I0319 09:48:31.363727 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a903df27069c3ffcdffe5c209813e61154a2a77e4c91beccbea48d083d5ebc8" Mar 19 09:48:31.365640 master-0 kubenswrapper[27819]: I0319 09:48:31.363759 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.378114 master-0 kubenswrapper[27819]: I0319 09:48:31.376283 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48418558-addd-4db9-a62d-177832acc8db-operator-scripts\") pod \"glance-d9a8-account-create-update-cqgzr\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.378114 master-0 kubenswrapper[27819]: I0319 09:48:31.376376 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m87s\" (UniqueName: \"kubernetes.io/projected/48418558-addd-4db9-a62d-177832acc8db-kube-api-access-6m87s\") pod \"glance-d9a8-account-create-update-cqgzr\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.378114 master-0 kubenswrapper[27819]: I0319 09:48:31.377850 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48418558-addd-4db9-a62d-177832acc8db-operator-scripts\") pod \"glance-d9a8-account-create-update-cqgzr\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.396930 master-0 kubenswrapper[27819]: I0319 09:48:31.396349 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:31.407038 master-0 kubenswrapper[27819]: I0319 09:48:31.404956 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m87s\" (UniqueName: \"kubernetes.io/projected/48418558-addd-4db9-a62d-177832acc8db-kube-api-access-6m87s\") pod \"glance-d9a8-account-create-update-cqgzr\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.417955 master-0 kubenswrapper[27819]: I0319 09:48:31.417892 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" podStartSLOduration=4.417872144 podStartE2EDuration="4.417872144s" podCreationTimestamp="2026-03-19 09:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:48:31.395653538 +0000 UTC m=+896.317231240" watchObservedRunningTime="2026-03-19 09:48:31.417872144 +0000 UTC m=+896.339449836" Mar 19 09:48:31.436747 master-0 kubenswrapper[27819]: I0319 09:48:31.436574 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:31.445287 master-0 kubenswrapper[27819]: I0319 09:48:31.445242 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:31.571652 master-0 kubenswrapper[27819]: I0319 09:48:31.571577 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-70ed5fc2-e581-4c2f-8361-5a7572c28a72\" (UniqueName: \"kubernetes.io/csi/topolvm.io^4212fcc3-2815-44a3-8fbe-f65860653551\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:31.587403 master-0 kubenswrapper[27819]: I0319 09:48:31.587353 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-combined-ca-bundle\") pod \"de024eb5-6cea-4ec6-b369-185f7c082092\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " Mar 19 09:48:31.587527 master-0 kubenswrapper[27819]: I0319 09:48:31.587424 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-scripts\") pod \"de024eb5-6cea-4ec6-b369-185f7c082092\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " Mar 19 09:48:31.587527 master-0 kubenswrapper[27819]: I0319 09:48:31.587486 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zqbl\" (UniqueName: \"kubernetes.io/projected/de024eb5-6cea-4ec6-b369-185f7c082092-kube-api-access-2zqbl\") pod \"de024eb5-6cea-4ec6-b369-185f7c082092\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " Mar 19 09:48:31.587718 master-0 kubenswrapper[27819]: I0319 09:48:31.587658 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-ring-data-devices\") pod \"de024eb5-6cea-4ec6-b369-185f7c082092\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " Mar 19 09:48:31.587941 master-0 kubenswrapper[27819]: I0319 
09:48:31.587767 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de024eb5-6cea-4ec6-b369-185f7c082092-etc-swift\") pod \"de024eb5-6cea-4ec6-b369-185f7c082092\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " Mar 19 09:48:31.587941 master-0 kubenswrapper[27819]: I0319 09:48:31.587845 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-dispersionconf\") pod \"de024eb5-6cea-4ec6-b369-185f7c082092\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " Mar 19 09:48:31.587941 master-0 kubenswrapper[27819]: I0319 09:48:31.587874 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-swiftconf\") pod \"de024eb5-6cea-4ec6-b369-185f7c082092\" (UID: \"de024eb5-6cea-4ec6-b369-185f7c082092\") " Mar 19 09:48:31.589203 master-0 kubenswrapper[27819]: I0319 09:48:31.589177 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-scripts" (OuterVolumeSpecName: "scripts") pod "de024eb5-6cea-4ec6-b369-185f7c082092" (UID: "de024eb5-6cea-4ec6-b369-185f7c082092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:31.589461 master-0 kubenswrapper[27819]: I0319 09:48:31.589438 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de024eb5-6cea-4ec6-b369-185f7c082092-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "de024eb5-6cea-4ec6-b369-185f7c082092" (UID: "de024eb5-6cea-4ec6-b369-185f7c082092"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:48:31.590014 master-0 kubenswrapper[27819]: I0319 09:48:31.589959 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "de024eb5-6cea-4ec6-b369-185f7c082092" (UID: "de024eb5-6cea-4ec6-b369-185f7c082092"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:31.592242 master-0 kubenswrapper[27819]: I0319 09:48:31.591653 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "de024eb5-6cea-4ec6-b369-185f7c082092" (UID: "de024eb5-6cea-4ec6-b369-185f7c082092"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:48:31.592242 master-0 kubenswrapper[27819]: I0319 09:48:31.591900 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "de024eb5-6cea-4ec6-b369-185f7c082092" (UID: "de024eb5-6cea-4ec6-b369-185f7c082092"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:48:31.592826 master-0 kubenswrapper[27819]: I0319 09:48:31.592735 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de024eb5-6cea-4ec6-b369-185f7c082092" (UID: "de024eb5-6cea-4ec6-b369-185f7c082092"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:48:31.603979 master-0 kubenswrapper[27819]: I0319 09:48:31.603697 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de024eb5-6cea-4ec6-b369-185f7c082092-kube-api-access-2zqbl" (OuterVolumeSpecName: "kube-api-access-2zqbl") pod "de024eb5-6cea-4ec6-b369-185f7c082092" (UID: "de024eb5-6cea-4ec6-b369-185f7c082092"). InnerVolumeSpecName "kube-api-access-2zqbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:31.689944 master-0 kubenswrapper[27819]: I0319 09:48:31.689883 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: I0319 09:48:31.690067 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: I0319 09:48:31.690081 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: I0319 09:48:31.690091 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zqbl\" (UniqueName: \"kubernetes.io/projected/de024eb5-6cea-4ec6-b369-185f7c082092-kube-api-access-2zqbl\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: I0319 09:48:31.690101 27819 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/de024eb5-6cea-4ec6-b369-185f7c082092-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: I0319 09:48:31.690109 27819 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/de024eb5-6cea-4ec6-b369-185f7c082092-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: I0319 09:48:31.690119 27819 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: I0319 09:48:31.690128 27819 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/de024eb5-6cea-4ec6-b369-185f7c082092-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: E0319 09:48:31.690233 27819 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: E0319 09:48:31.690282 27819 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 09:48:31.690475 master-0 kubenswrapper[27819]: E0319 09:48:31.690325 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift podName:53eef9d1-14df-45aa-ae9b-bc7583066d10 nodeName:}" failed. No retries permitted until 2026-03-19 09:48:33.690311379 +0000 UTC m=+898.611889071 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift") pod "swift-storage-0" (UID: "53eef9d1-14df-45aa-ae9b-bc7583066d10") : configmap "swift-ring-files" not found Mar 19 09:48:32.285572 master-0 kubenswrapper[27819]: I0319 09:48:32.281698 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:32.292496 master-0 kubenswrapper[27819]: I0319 09:48:32.292455 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xnr86" Mar 19 09:48:32.310909 master-0 kubenswrapper[27819]: I0319 09:48:32.303824 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:32.310909 master-0 kubenswrapper[27819]: I0319 09:48:32.309018 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kmw8s"] Mar 19 09:48:32.400341 master-0 kubenswrapper[27819]: I0319 09:48:32.399664 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmw8s" event={"ID":"533235c2-501c-4388-beef-c19b3f33f733","Type":"ContainerStarted","Data":"f238fb1aabcf73d0bb7809ad8d7ba4b97b476cccc8bb47defb9ffc7e998c0551"} Mar 19 09:48:32.400950 master-0 kubenswrapper[27819]: I0319 09:48:32.400881 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xnr86" event={"ID":"39e53946-899c-430a-8758-b8f7a30e3897","Type":"ContainerDied","Data":"e4f08c2085adeecd97daec059834b4714eee5db060d2065f368d4bddea9b98aa"} Mar 19 09:48:32.400950 master-0 kubenswrapper[27819]: I0319 09:48:32.400909 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f08c2085adeecd97daec059834b4714eee5db060d2065f368d4bddea9b98aa" Mar 19 09:48:32.401045 master-0 kubenswrapper[27819]: 
I0319 09:48:32.400961 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xnr86" Mar 19 09:48:32.402162 master-0 kubenswrapper[27819]: I0319 09:48:32.402129 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-7mpd2"] Mar 19 09:48:32.403305 master-0 kubenswrapper[27819]: I0319 09:48:32.403049 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6a2c-account-create-update-bhtkr" event={"ID":"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a","Type":"ContainerDied","Data":"014d4ded5662e57dc479392a2692e8dc60af6c84bc69d3c3ebb3aaf10ff593e4"} Mar 19 09:48:32.403305 master-0 kubenswrapper[27819]: I0319 09:48:32.403079 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014d4ded5662e57dc479392a2692e8dc60af6c84bc69d3c3ebb3aaf10ff593e4" Mar 19 09:48:32.403305 master-0 kubenswrapper[27819]: I0319 09:48:32.403135 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6a2c-account-create-update-bhtkr" Mar 19 09:48:32.404755 master-0 kubenswrapper[27819]: I0319 09:48:32.404736 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-operator-scripts\") pod \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " Mar 19 09:48:32.405391 master-0 kubenswrapper[27819]: I0319 09:48:32.405256 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce72d5ce-6259-47ba-a860-7b45dafbbf7a" (UID: "ce72d5ce-6259-47ba-a860-7b45dafbbf7a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:32.405716 master-0 kubenswrapper[27819]: I0319 09:48:32.405697 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e53946-899c-430a-8758-b8f7a30e3897-operator-scripts\") pod \"39e53946-899c-430a-8758-b8f7a30e3897\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " Mar 19 09:48:32.406331 master-0 kubenswrapper[27819]: I0319 09:48:32.406211 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e53946-899c-430a-8758-b8f7a30e3897-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39e53946-899c-430a-8758-b8f7a30e3897" (UID: "39e53946-899c-430a-8758-b8f7a30e3897"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:32.406490 master-0 kubenswrapper[27819]: I0319 09:48:32.406476 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-operator-scripts\") pod \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " Mar 19 09:48:32.406775 master-0 kubenswrapper[27819]: I0319 09:48:32.406757 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmd29\" (UniqueName: \"kubernetes.io/projected/39e53946-899c-430a-8758-b8f7a30e3897-kube-api-access-fmd29\") pod \"39e53946-899c-430a-8758-b8f7a30e3897\" (UID: \"39e53946-899c-430a-8758-b8f7a30e3897\") " Mar 19 09:48:32.406989 master-0 kubenswrapper[27819]: I0319 09:48:32.406956 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-080c-account-create-update-9fljq" Mar 19 09:48:32.407037 master-0 kubenswrapper[27819]: I0319 09:48:32.406900 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-080c-account-create-update-9fljq" event={"ID":"ce72d5ce-6259-47ba-a860-7b45dafbbf7a","Type":"ContainerDied","Data":"6a9fb0513a5bc633959e6201c7c389d662773e5a723302e2552ce3055c9ab6ed"} Mar 19 09:48:32.407037 master-0 kubenswrapper[27819]: I0319 09:48:32.407018 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-h4t4p" Mar 19 09:48:32.407037 master-0 kubenswrapper[27819]: I0319 09:48:32.407013 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a" (UID: "48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:32.407123 master-0 kubenswrapper[27819]: I0319 09:48:32.407061 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9fb0513a5bc633959e6201c7c389d662773e5a723302e2552ce3055c9ab6ed" Mar 19 09:48:32.407215 master-0 kubenswrapper[27819]: I0319 09:48:32.407199 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdmdg\" (UniqueName: \"kubernetes.io/projected/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-kube-api-access-hdmdg\") pod \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\" (UID: \"48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a\") " Mar 19 09:48:32.409235 master-0 kubenswrapper[27819]: W0319 09:48:32.409180 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c5649b3_75ac_4ffb_8aa8_d4bd877fdfc5.slice/crio-2c49fcec2d437527700002b9af30bdbbebbb624cd75ea1e35b2112b853510af6 WatchSource:0}: Error finding container 2c49fcec2d437527700002b9af30bdbbebbb624cd75ea1e35b2112b853510af6: Status 404 returned error can't find the container with id 2c49fcec2d437527700002b9af30bdbbebbb624cd75ea1e35b2112b853510af6 Mar 19 09:48:32.409941 master-0 kubenswrapper[27819]: I0319 09:48:32.409886 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e53946-899c-430a-8758-b8f7a30e3897-kube-api-access-fmd29" (OuterVolumeSpecName: "kube-api-access-fmd29") pod "39e53946-899c-430a-8758-b8f7a30e3897" (UID: "39e53946-899c-430a-8758-b8f7a30e3897"). InnerVolumeSpecName "kube-api-access-fmd29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:32.412447 master-0 kubenswrapper[27819]: I0319 09:48:32.412363 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-kube-api-access-hdmdg" (OuterVolumeSpecName: "kube-api-access-hdmdg") pod "48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a" (UID: "48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a"). InnerVolumeSpecName "kube-api-access-hdmdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:32.415973 master-0 kubenswrapper[27819]: I0319 09:48:32.415944 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh5g7\" (UniqueName: \"kubernetes.io/projected/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-kube-api-access-vh5g7\") pod \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\" (UID: \"ce72d5ce-6259-47ba-a860-7b45dafbbf7a\") " Mar 19 09:48:32.417386 master-0 kubenswrapper[27819]: I0319 09:48:32.417365 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e53946-899c-430a-8758-b8f7a30e3897-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:32.417582 master-0 kubenswrapper[27819]: I0319 09:48:32.417526 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:32.417679 master-0 kubenswrapper[27819]: I0319 09:48:32.417665 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmd29\" (UniqueName: \"kubernetes.io/projected/39e53946-899c-430a-8758-b8f7a30e3897-kube-api-access-fmd29\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:32.417747 master-0 kubenswrapper[27819]: I0319 09:48:32.417734 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdmdg\" (UniqueName: 
\"kubernetes.io/projected/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a-kube-api-access-hdmdg\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:32.417809 master-0 kubenswrapper[27819]: I0319 09:48:32.417798 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:32.418579 master-0 kubenswrapper[27819]: I0319 09:48:32.418536 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-kube-api-access-vh5g7" (OuterVolumeSpecName: "kube-api-access-vh5g7") pod "ce72d5ce-6259-47ba-a860-7b45dafbbf7a" (UID: "ce72d5ce-6259-47ba-a860-7b45dafbbf7a"). InnerVolumeSpecName "kube-api-access-vh5g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:32.506719 master-0 kubenswrapper[27819]: I0319 09:48:32.506072 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-d9a8-account-create-update-cqgzr"] Mar 19 09:48:32.536574 master-0 kubenswrapper[27819]: I0319 09:48:32.533468 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh5g7\" (UniqueName: \"kubernetes.io/projected/ce72d5ce-6259-47ba-a860-7b45dafbbf7a-kube-api-access-vh5g7\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:32.555565 master-0 kubenswrapper[27819]: I0319 09:48:32.553164 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-h4t4p"] Mar 19 09:48:32.565032 master-0 kubenswrapper[27819]: I0319 09:48:32.564965 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-h4t4p"] Mar 19 09:48:32.996477 master-0 kubenswrapper[27819]: E0319 09:48:32.996404 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48418558_addd_4db9_a62d_177832acc8db.slice/crio-9b4dc71f83dabae1193cc17268f241173af34354bb14adfd68447e5040782063.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c5649b3_75ac_4ffb_8aa8_d4bd877fdfc5.slice/crio-conmon-e67e0b336cdbe58fcf98005551a867032fe99fca7066309b489ee6a847e55cc3.scope\": RecentStats: unable to find data in memory cache]" Mar 19 09:48:33.297085 master-0 kubenswrapper[27819]: I0319 09:48:33.296988 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de024eb5-6cea-4ec6-b369-185f7c082092" path="/var/lib/kubelet/pods/de024eb5-6cea-4ec6-b369-185f7c082092/volumes" Mar 19 09:48:33.451713 master-0 kubenswrapper[27819]: I0319 09:48:33.451662 27819 generic.go:334] "Generic (PLEG): container finished" podID="48418558-addd-4db9-a62d-177832acc8db" containerID="9b4dc71f83dabae1193cc17268f241173af34354bb14adfd68447e5040782063" exitCode=0 Mar 19 09:48:33.452101 master-0 kubenswrapper[27819]: I0319 09:48:33.452061 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d9a8-account-create-update-cqgzr" event={"ID":"48418558-addd-4db9-a62d-177832acc8db","Type":"ContainerDied","Data":"9b4dc71f83dabae1193cc17268f241173af34354bb14adfd68447e5040782063"} Mar 19 09:48:33.452209 master-0 kubenswrapper[27819]: I0319 09:48:33.452195 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d9a8-account-create-update-cqgzr" event={"ID":"48418558-addd-4db9-a62d-177832acc8db","Type":"ContainerStarted","Data":"954edb1b94289028528e4e6fd42810682cbb2a5762e0f4ff99b55e6140e21ee1"} Mar 19 09:48:33.471756 master-0 kubenswrapper[27819]: I0319 09:48:33.469843 27819 generic.go:334] "Generic (PLEG): container finished" podID="6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5" containerID="e67e0b336cdbe58fcf98005551a867032fe99fca7066309b489ee6a847e55cc3" exitCode=0 Mar 19 09:48:33.471756 
master-0 kubenswrapper[27819]: I0319 09:48:33.469930 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7mpd2" event={"ID":"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5","Type":"ContainerDied","Data":"e67e0b336cdbe58fcf98005551a867032fe99fca7066309b489ee6a847e55cc3"} Mar 19 09:48:33.471756 master-0 kubenswrapper[27819]: I0319 09:48:33.470397 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7mpd2" event={"ID":"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5","Type":"ContainerStarted","Data":"2c49fcec2d437527700002b9af30bdbbebbb624cd75ea1e35b2112b853510af6"} Mar 19 09:48:33.780609 master-0 kubenswrapper[27819]: I0319 09:48:33.779116 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:33.780609 master-0 kubenswrapper[27819]: E0319 09:48:33.779395 27819 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 09:48:33.780609 master-0 kubenswrapper[27819]: E0319 09:48:33.779443 27819 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 09:48:33.780609 master-0 kubenswrapper[27819]: E0319 09:48:33.779512 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift podName:53eef9d1-14df-45aa-ae9b-bc7583066d10 nodeName:}" failed. No retries permitted until 2026-03-19 09:48:37.779490547 +0000 UTC m=+902.701068239 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift") pod "swift-storage-0" (UID: "53eef9d1-14df-45aa-ae9b-bc7583066d10") : configmap "swift-ring-files" not found Mar 19 09:48:33.890569 master-0 kubenswrapper[27819]: I0319 09:48:33.890496 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-swwmg"] Mar 19 09:48:33.906483 master-0 kubenswrapper[27819]: I0319 09:48:33.906440 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-swwmg"] Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 09:48:34.000174 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tb9n2"] Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: E0319 09:48:34.000642 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a" containerName="mariadb-account-create-update" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 09:48:34.000657 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a" containerName="mariadb-account-create-update" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: E0319 09:48:34.000689 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce72d5ce-6259-47ba-a860-7b45dafbbf7a" containerName="mariadb-account-create-update" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 09:48:34.000697 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce72d5ce-6259-47ba-a860-7b45dafbbf7a" containerName="mariadb-account-create-update" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: E0319 09:48:34.000734 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e53946-899c-430a-8758-b8f7a30e3897" containerName="mariadb-database-create" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 
09:48:34.000740 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e53946-899c-430a-8758-b8f7a30e3897" containerName="mariadb-database-create" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 09:48:34.000989 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e53946-899c-430a-8758-b8f7a30e3897" containerName="mariadb-database-create" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 09:48:34.001002 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a" containerName="mariadb-account-create-update" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 09:48:34.001019 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce72d5ce-6259-47ba-a860-7b45dafbbf7a" containerName="mariadb-account-create-update" Mar 19 09:48:34.002305 master-0 kubenswrapper[27819]: I0319 09:48:34.001699 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:34.006558 master-0 kubenswrapper[27819]: I0319 09:48:34.006301 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 19 09:48:34.024770 master-0 kubenswrapper[27819]: I0319 09:48:34.018386 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tb9n2"] Mar 19 09:48:34.086732 master-0 kubenswrapper[27819]: I0319 09:48:34.086650 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2wz\" (UniqueName: \"kubernetes.io/projected/5a60df7e-6a63-4559-94ac-e793561673f1-kube-api-access-rj2wz\") pod \"root-account-create-update-tb9n2\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:34.086958 master-0 kubenswrapper[27819]: I0319 09:48:34.086811 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a60df7e-6a63-4559-94ac-e793561673f1-operator-scripts\") pod \"root-account-create-update-tb9n2\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:34.189900 master-0 kubenswrapper[27819]: I0319 09:48:34.189848 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a60df7e-6a63-4559-94ac-e793561673f1-operator-scripts\") pod \"root-account-create-update-tb9n2\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:34.190282 master-0 kubenswrapper[27819]: I0319 09:48:34.190263 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2wz\" (UniqueName: \"kubernetes.io/projected/5a60df7e-6a63-4559-94ac-e793561673f1-kube-api-access-rj2wz\") pod \"root-account-create-update-tb9n2\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:34.191581 master-0 kubenswrapper[27819]: I0319 09:48:34.191490 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a60df7e-6a63-4559-94ac-e793561673f1-operator-scripts\") pod \"root-account-create-update-tb9n2\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:34.207916 master-0 kubenswrapper[27819]: I0319 09:48:34.207867 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2wz\" (UniqueName: \"kubernetes.io/projected/5a60df7e-6a63-4559-94ac-e793561673f1-kube-api-access-rj2wz\") pod \"root-account-create-update-tb9n2\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " pod="openstack/root-account-create-update-tb9n2" Mar 19 
09:48:34.337328 master-0 kubenswrapper[27819]: I0319 09:48:34.336873 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:35.303693 master-0 kubenswrapper[27819]: I0319 09:48:35.303633 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6298d46-2954-436b-aed1-cd9c0f7e91e4" path="/var/lib/kubelet/pods/b6298d46-2954-436b-aed1-cd9c0f7e91e4/volumes" Mar 19 09:48:36.183431 master-0 kubenswrapper[27819]: I0319 09:48:36.183392 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:36.198740 master-0 kubenswrapper[27819]: I0319 09:48:36.198698 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:36.342051 master-0 kubenswrapper[27819]: I0319 09:48:36.340613 27819 trace.go:236] Trace[1415921064]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (19-Mar-2026 09:48:35.270) (total time: 1070ms): Mar 19 09:48:36.342051 master-0 kubenswrapper[27819]: Trace[1415921064]: [1.070481725s] [1.070481725s] END Mar 19 09:48:36.344582 master-0 kubenswrapper[27819]: I0319 09:48:36.344254 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-operator-scripts\") pod \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " Mar 19 09:48:36.344582 master-0 kubenswrapper[27819]: I0319 09:48:36.344432 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48418558-addd-4db9-a62d-177832acc8db-operator-scripts\") pod \"48418558-addd-4db9-a62d-177832acc8db\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " Mar 19 09:48:36.344582 
master-0 kubenswrapper[27819]: I0319 09:48:36.344468 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gj54\" (UniqueName: \"kubernetes.io/projected/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-kube-api-access-8gj54\") pod \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\" (UID: \"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5\") " Mar 19 09:48:36.344582 master-0 kubenswrapper[27819]: I0319 09:48:36.344504 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m87s\" (UniqueName: \"kubernetes.io/projected/48418558-addd-4db9-a62d-177832acc8db-kube-api-access-6m87s\") pod \"48418558-addd-4db9-a62d-177832acc8db\" (UID: \"48418558-addd-4db9-a62d-177832acc8db\") " Mar 19 09:48:36.346379 master-0 kubenswrapper[27819]: I0319 09:48:36.346084 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5" (UID: "6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:36.348237 master-0 kubenswrapper[27819]: I0319 09:48:36.346824 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48418558-addd-4db9-a62d-177832acc8db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48418558-addd-4db9-a62d-177832acc8db" (UID: "48418558-addd-4db9-a62d-177832acc8db"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:36.348999 master-0 kubenswrapper[27819]: I0319 09:48:36.348977 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48418558-addd-4db9-a62d-177832acc8db-kube-api-access-6m87s" (OuterVolumeSpecName: "kube-api-access-6m87s") pod "48418558-addd-4db9-a62d-177832acc8db" (UID: "48418558-addd-4db9-a62d-177832acc8db"). InnerVolumeSpecName "kube-api-access-6m87s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:36.351969 master-0 kubenswrapper[27819]: I0319 09:48:36.351907 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-kube-api-access-8gj54" (OuterVolumeSpecName: "kube-api-access-8gj54") pod "6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5" (UID: "6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5"). InnerVolumeSpecName "kube-api-access-8gj54". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:36.447956 master-0 kubenswrapper[27819]: I0319 09:48:36.447849 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48418558-addd-4db9-a62d-177832acc8db-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:36.447956 master-0 kubenswrapper[27819]: I0319 09:48:36.447886 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gj54\" (UniqueName: \"kubernetes.io/projected/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-kube-api-access-8gj54\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:36.447956 master-0 kubenswrapper[27819]: I0319 09:48:36.447896 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m87s\" (UniqueName: \"kubernetes.io/projected/48418558-addd-4db9-a62d-177832acc8db-kube-api-access-6m87s\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:36.447956 master-0 kubenswrapper[27819]: I0319 09:48:36.447905 27819 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:36.512195 master-0 kubenswrapper[27819]: I0319 09:48:36.512136 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmw8s" event={"ID":"533235c2-501c-4388-beef-c19b3f33f733","Type":"ContainerStarted","Data":"155401e1ef0126aa2dd49b154e4f4e970cf6169ce19684bdfdd5557211f4b695"} Mar 19 09:48:36.526316 master-0 kubenswrapper[27819]: I0319 09:48:36.526240 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tb9n2"] Mar 19 09:48:36.533945 master-0 kubenswrapper[27819]: I0319 09:48:36.533885 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-d9a8-account-create-update-cqgzr" Mar 19 09:48:36.534872 master-0 kubenswrapper[27819]: I0319 09:48:36.534809 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-d9a8-account-create-update-cqgzr" event={"ID":"48418558-addd-4db9-a62d-177832acc8db","Type":"ContainerDied","Data":"954edb1b94289028528e4e6fd42810682cbb2a5762e0f4ff99b55e6140e21ee1"} Mar 19 09:48:36.534952 master-0 kubenswrapper[27819]: I0319 09:48:36.534882 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="954edb1b94289028528e4e6fd42810682cbb2a5762e0f4ff99b55e6140e21ee1" Mar 19 09:48:36.537168 master-0 kubenswrapper[27819]: I0319 09:48:36.537096 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-7mpd2" event={"ID":"6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5","Type":"ContainerDied","Data":"2c49fcec2d437527700002b9af30bdbbebbb624cd75ea1e35b2112b853510af6"} Mar 19 09:48:36.537168 master-0 kubenswrapper[27819]: I0319 09:48:36.537164 27819 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2c49fcec2d437527700002b9af30bdbbebbb624cd75ea1e35b2112b853510af6" Mar 19 09:48:36.537348 master-0 kubenswrapper[27819]: I0319 09:48:36.537276 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-7mpd2" Mar 19 09:48:36.538131 master-0 kubenswrapper[27819]: W0319 09:48:36.537960 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a60df7e_6a63_4559_94ac_e793561673f1.slice/crio-5104eb9449c5386fdf5e35f0b1fe71a8575cdae3e5f2d9c6755c6d35038bf00d WatchSource:0}: Error finding container 5104eb9449c5386fdf5e35f0b1fe71a8575cdae3e5f2d9c6755c6d35038bf00d: Status 404 returned error can't find the container with id 5104eb9449c5386fdf5e35f0b1fe71a8575cdae3e5f2d9c6755c6d35038bf00d Mar 19 09:48:36.551146 master-0 kubenswrapper[27819]: I0319 09:48:36.551076 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 19 09:48:36.558680 master-0 kubenswrapper[27819]: I0319 09:48:36.557665 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kmw8s" podStartSLOduration=2.629100519 podStartE2EDuration="6.557644876s" podCreationTimestamp="2026-03-19 09:48:30 +0000 UTC" firstStartedPulling="2026-03-19 09:48:32.293935363 +0000 UTC m=+897.215513055" lastFinishedPulling="2026-03-19 09:48:36.22247972 +0000 UTC m=+901.144057412" observedRunningTime="2026-03-19 09:48:36.542461332 +0000 UTC m=+901.464039024" watchObservedRunningTime="2026-03-19 09:48:36.557644876 +0000 UTC m=+901.479222568" Mar 19 09:48:37.547252 master-0 kubenswrapper[27819]: I0319 09:48:37.547189 27819 generic.go:334] "Generic (PLEG): container finished" podID="5a60df7e-6a63-4559-94ac-e793561673f1" containerID="ae0636da7880ae75a618ed37816678563df55b2ac15bec4865b8f95feccae8d5" exitCode=0 Mar 19 09:48:37.547252 master-0 kubenswrapper[27819]: I0319 
09:48:37.547242 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tb9n2" event={"ID":"5a60df7e-6a63-4559-94ac-e793561673f1","Type":"ContainerDied","Data":"ae0636da7880ae75a618ed37816678563df55b2ac15bec4865b8f95feccae8d5"} Mar 19 09:48:37.547834 master-0 kubenswrapper[27819]: I0319 09:48:37.547279 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tb9n2" event={"ID":"5a60df7e-6a63-4559-94ac-e793561673f1","Type":"ContainerStarted","Data":"5104eb9449c5386fdf5e35f0b1fe71a8575cdae3e5f2d9c6755c6d35038bf00d"} Mar 19 09:48:37.880573 master-0 kubenswrapper[27819]: I0319 09:48:37.880297 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:37.880573 master-0 kubenswrapper[27819]: E0319 09:48:37.880568 27819 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 19 09:48:37.880573 master-0 kubenswrapper[27819]: E0319 09:48:37.880584 27819 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 19 09:48:37.880882 master-0 kubenswrapper[27819]: E0319 09:48:37.880632 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift podName:53eef9d1-14df-45aa-ae9b-bc7583066d10 nodeName:}" failed. No retries permitted until 2026-03-19 09:48:45.880614613 +0000 UTC m=+910.802192305 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift") pod "swift-storage-0" (UID: "53eef9d1-14df-45aa-ae9b-bc7583066d10") : configmap "swift-ring-files" not found Mar 19 09:48:38.327892 master-0 kubenswrapper[27819]: I0319 09:48:38.327726 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:48:38.441505 master-0 kubenswrapper[27819]: I0319 09:48:38.436189 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv"] Mar 19 09:48:38.441505 master-0 kubenswrapper[27819]: I0319 09:48:38.436529 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" podUID="073d89ae-4524-4fda-87d1-bdd81ef69236" containerName="dnsmasq-dns" containerID="cri-o://638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5" gracePeriod=10 Mar 19 09:48:39.260328 master-0 kubenswrapper[27819]: I0319 09:48:39.260292 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:39.286191 master-0 kubenswrapper[27819]: I0319 09:48:39.286145 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:48:39.345630 master-0 kubenswrapper[27819]: I0319 09:48:39.345583 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2wz\" (UniqueName: \"kubernetes.io/projected/5a60df7e-6a63-4559-94ac-e793561673f1-kube-api-access-rj2wz\") pod \"5a60df7e-6a63-4559-94ac-e793561673f1\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " Mar 19 09:48:39.345814 master-0 kubenswrapper[27819]: I0319 09:48:39.345676 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdrk\" (UniqueName: \"kubernetes.io/projected/073d89ae-4524-4fda-87d1-bdd81ef69236-kube-api-access-jvdrk\") pod \"073d89ae-4524-4fda-87d1-bdd81ef69236\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " Mar 19 09:48:39.345814 master-0 kubenswrapper[27819]: I0319 09:48:39.345714 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-config\") pod \"073d89ae-4524-4fda-87d1-bdd81ef69236\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " Mar 19 09:48:39.345949 master-0 kubenswrapper[27819]: I0319 09:48:39.345887 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a60df7e-6a63-4559-94ac-e793561673f1-operator-scripts\") pod \"5a60df7e-6a63-4559-94ac-e793561673f1\" (UID: \"5a60df7e-6a63-4559-94ac-e793561673f1\") " Mar 19 09:48:39.346006 master-0 kubenswrapper[27819]: I0319 09:48:39.345964 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-dns-svc\") pod \"073d89ae-4524-4fda-87d1-bdd81ef69236\" (UID: \"073d89ae-4524-4fda-87d1-bdd81ef69236\") " Mar 19 09:48:39.347325 master-0 kubenswrapper[27819]: I0319 09:48:39.347295 27819 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a60df7e-6a63-4559-94ac-e793561673f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a60df7e-6a63-4559-94ac-e793561673f1" (UID: "5a60df7e-6a63-4559-94ac-e793561673f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:39.350061 master-0 kubenswrapper[27819]: I0319 09:48:39.350007 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a60df7e-6a63-4559-94ac-e793561673f1-kube-api-access-rj2wz" (OuterVolumeSpecName: "kube-api-access-rj2wz") pod "5a60df7e-6a63-4559-94ac-e793561673f1" (UID: "5a60df7e-6a63-4559-94ac-e793561673f1"). InnerVolumeSpecName "kube-api-access-rj2wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:39.350648 master-0 kubenswrapper[27819]: I0319 09:48:39.350577 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073d89ae-4524-4fda-87d1-bdd81ef69236-kube-api-access-jvdrk" (OuterVolumeSpecName: "kube-api-access-jvdrk") pod "073d89ae-4524-4fda-87d1-bdd81ef69236" (UID: "073d89ae-4524-4fda-87d1-bdd81ef69236"). InnerVolumeSpecName "kube-api-access-jvdrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:39.403290 master-0 kubenswrapper[27819]: I0319 09:48:39.403231 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-config" (OuterVolumeSpecName: "config") pod "073d89ae-4524-4fda-87d1-bdd81ef69236" (UID: "073d89ae-4524-4fda-87d1-bdd81ef69236"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:39.405186 master-0 kubenswrapper[27819]: I0319 09:48:39.405099 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "073d89ae-4524-4fda-87d1-bdd81ef69236" (UID: "073d89ae-4524-4fda-87d1-bdd81ef69236"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:39.468488 master-0 kubenswrapper[27819]: I0319 09:48:39.449863 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2wz\" (UniqueName: \"kubernetes.io/projected/5a60df7e-6a63-4559-94ac-e793561673f1-kube-api-access-rj2wz\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:39.468488 master-0 kubenswrapper[27819]: I0319 09:48:39.449908 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdrk\" (UniqueName: \"kubernetes.io/projected/073d89ae-4524-4fda-87d1-bdd81ef69236-kube-api-access-jvdrk\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:39.468488 master-0 kubenswrapper[27819]: I0319 09:48:39.449919 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:39.468488 master-0 kubenswrapper[27819]: I0319 09:48:39.449930 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a60df7e-6a63-4559-94ac-e793561673f1-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:39.468488 master-0 kubenswrapper[27819]: I0319 09:48:39.449939 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/073d89ae-4524-4fda-87d1-bdd81ef69236-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:39.572123 master-0 kubenswrapper[27819]: I0319 
09:48:39.572083 27819 generic.go:334] "Generic (PLEG): container finished" podID="073d89ae-4524-4fda-87d1-bdd81ef69236" containerID="638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5" exitCode=0 Mar 19 09:48:39.572381 master-0 kubenswrapper[27819]: I0319 09:48:39.572361 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" event={"ID":"073d89ae-4524-4fda-87d1-bdd81ef69236","Type":"ContainerDied","Data":"638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5"} Mar 19 09:48:39.572465 master-0 kubenswrapper[27819]: I0319 09:48:39.572453 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" event={"ID":"073d89ae-4524-4fda-87d1-bdd81ef69236","Type":"ContainerDied","Data":"a929db7f5a708c25b8dbee87c6dc53bf4684978b1b167295cc0dce049d47140d"} Mar 19 09:48:39.572569 master-0 kubenswrapper[27819]: I0319 09:48:39.572557 27819 scope.go:117] "RemoveContainer" containerID="638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5" Mar 19 09:48:39.572740 master-0 kubenswrapper[27819]: I0319 09:48:39.572726 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv" Mar 19 09:48:39.576835 master-0 kubenswrapper[27819]: I0319 09:48:39.576803 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tb9n2" event={"ID":"5a60df7e-6a63-4559-94ac-e793561673f1","Type":"ContainerDied","Data":"5104eb9449c5386fdf5e35f0b1fe71a8575cdae3e5f2d9c6755c6d35038bf00d"} Mar 19 09:48:39.576914 master-0 kubenswrapper[27819]: I0319 09:48:39.576838 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5104eb9449c5386fdf5e35f0b1fe71a8575cdae3e5f2d9c6755c6d35038bf00d" Mar 19 09:48:39.576914 master-0 kubenswrapper[27819]: I0319 09:48:39.576880 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tb9n2" Mar 19 09:48:39.620746 master-0 kubenswrapper[27819]: I0319 09:48:39.620696 27819 scope.go:117] "RemoveContainer" containerID="1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496" Mar 19 09:48:39.623317 master-0 kubenswrapper[27819]: I0319 09:48:39.623282 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv"] Mar 19 09:48:39.642482 master-0 kubenswrapper[27819]: I0319 09:48:39.642239 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6ff8fd9d5c-bsrqv"] Mar 19 09:48:39.644729 master-0 kubenswrapper[27819]: I0319 09:48:39.644691 27819 scope.go:117] "RemoveContainer" containerID="638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5" Mar 19 09:48:39.645844 master-0 kubenswrapper[27819]: E0319 09:48:39.645801 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5\": container with ID starting with 638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5 not found: ID does not exist" containerID="638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5" Mar 19 09:48:39.645844 master-0 kubenswrapper[27819]: I0319 09:48:39.645833 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5"} err="failed to get container status \"638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5\": rpc error: code = NotFound desc = could not find container \"638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5\": container with ID starting with 638feb280a9099a7db1523c3f5d74a636b08b42d54884a623edf064528214db5 not found: ID does not exist" Mar 19 09:48:39.645960 master-0 kubenswrapper[27819]: I0319 09:48:39.645854 27819 scope.go:117] 
"RemoveContainer" containerID="1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496" Mar 19 09:48:39.649657 master-0 kubenswrapper[27819]: E0319 09:48:39.649619 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496\": container with ID starting with 1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496 not found: ID does not exist" containerID="1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496" Mar 19 09:48:39.649657 master-0 kubenswrapper[27819]: I0319 09:48:39.649652 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496"} err="failed to get container status \"1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496\": rpc error: code = NotFound desc = could not find container \"1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496\": container with ID starting with 1f24683cdcb3b21bbad603440db4afd80c02e43315296e39aa662c9451fbd496 not found: ID does not exist" Mar 19 09:48:41.176489 master-0 kubenswrapper[27819]: I0319 09:48:41.176429 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jpnfs"] Mar 19 09:48:41.177024 master-0 kubenswrapper[27819]: E0319 09:48:41.176960 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5" containerName="mariadb-database-create" Mar 19 09:48:41.177024 master-0 kubenswrapper[27819]: I0319 09:48:41.176975 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5" containerName="mariadb-database-create" Mar 19 09:48:41.177024 master-0 kubenswrapper[27819]: E0319 09:48:41.176999 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073d89ae-4524-4fda-87d1-bdd81ef69236" 
containerName="dnsmasq-dns" Mar 19 09:48:41.177024 master-0 kubenswrapper[27819]: I0319 09:48:41.177007 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="073d89ae-4524-4fda-87d1-bdd81ef69236" containerName="dnsmasq-dns" Mar 19 09:48:41.177024 master-0 kubenswrapper[27819]: E0319 09:48:41.177023 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a60df7e-6a63-4559-94ac-e793561673f1" containerName="mariadb-account-create-update" Mar 19 09:48:41.177172 master-0 kubenswrapper[27819]: I0319 09:48:41.177030 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a60df7e-6a63-4559-94ac-e793561673f1" containerName="mariadb-account-create-update" Mar 19 09:48:41.177172 master-0 kubenswrapper[27819]: E0319 09:48:41.177062 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48418558-addd-4db9-a62d-177832acc8db" containerName="mariadb-account-create-update" Mar 19 09:48:41.177172 master-0 kubenswrapper[27819]: I0319 09:48:41.177068 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="48418558-addd-4db9-a62d-177832acc8db" containerName="mariadb-account-create-update" Mar 19 09:48:41.177172 master-0 kubenswrapper[27819]: E0319 09:48:41.177083 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073d89ae-4524-4fda-87d1-bdd81ef69236" containerName="init" Mar 19 09:48:41.177172 master-0 kubenswrapper[27819]: I0319 09:48:41.177090 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="073d89ae-4524-4fda-87d1-bdd81ef69236" containerName="init" Mar 19 09:48:41.177308 master-0 kubenswrapper[27819]: I0319 09:48:41.177273 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5" containerName="mariadb-database-create" Mar 19 09:48:41.177338 master-0 kubenswrapper[27819]: I0319 09:48:41.177308 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="073d89ae-4524-4fda-87d1-bdd81ef69236" containerName="dnsmasq-dns" Mar 19 09:48:41.177338 
master-0 kubenswrapper[27819]: I0319 09:48:41.177320 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="48418558-addd-4db9-a62d-177832acc8db" containerName="mariadb-account-create-update" Mar 19 09:48:41.177338 master-0 kubenswrapper[27819]: I0319 09:48:41.177336 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a60df7e-6a63-4559-94ac-e793561673f1" containerName="mariadb-account-create-update" Mar 19 09:48:41.177995 master-0 kubenswrapper[27819]: I0319 09:48:41.177974 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.180193 master-0 kubenswrapper[27819]: I0319 09:48:41.179972 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-ae80b-config-data" Mar 19 09:48:41.187164 master-0 kubenswrapper[27819]: I0319 09:48:41.187119 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jpnfs"] Mar 19 09:48:41.284893 master-0 kubenswrapper[27819]: I0319 09:48:41.284827 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-db-sync-config-data\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.285218 master-0 kubenswrapper[27819]: I0319 09:48:41.284943 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjxl4\" (UniqueName: \"kubernetes.io/projected/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-kube-api-access-xjxl4\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.285218 master-0 kubenswrapper[27819]: I0319 09:48:41.284966 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-config-data\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.285218 master-0 kubenswrapper[27819]: I0319 09:48:41.285048 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-combined-ca-bundle\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.289906 master-0 kubenswrapper[27819]: I0319 09:48:41.289863 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073d89ae-4524-4fda-87d1-bdd81ef69236" path="/var/lib/kubelet/pods/073d89ae-4524-4fda-87d1-bdd81ef69236/volumes" Mar 19 09:48:41.386971 master-0 kubenswrapper[27819]: I0319 09:48:41.386919 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-combined-ca-bundle\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.387272 master-0 kubenswrapper[27819]: I0319 09:48:41.387254 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-db-sync-config-data\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.387463 master-0 kubenswrapper[27819]: I0319 09:48:41.387442 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjxl4\" (UniqueName: \"kubernetes.io/projected/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-kube-api-access-xjxl4\") pod \"glance-db-sync-jpnfs\" 
(UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.387636 master-0 kubenswrapper[27819]: I0319 09:48:41.387615 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-config-data\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.392388 master-0 kubenswrapper[27819]: I0319 09:48:41.392347 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-db-sync-config-data\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.392618 master-0 kubenswrapper[27819]: I0319 09:48:41.392586 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-combined-ca-bundle\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.392857 master-0 kubenswrapper[27819]: I0319 09:48:41.392808 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-config-data\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:41.408555 master-0 kubenswrapper[27819]: I0319 09:48:41.408492 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjxl4\" (UniqueName: \"kubernetes.io/projected/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-kube-api-access-xjxl4\") pod \"glance-db-sync-jpnfs\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " pod="openstack/glance-db-sync-jpnfs" 
Mar 19 09:48:41.492615 master-0 kubenswrapper[27819]: I0319 09:48:41.492459 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jpnfs" Mar 19 09:48:42.024355 master-0 kubenswrapper[27819]: I0319 09:48:42.024256 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 19 09:48:42.075862 master-0 kubenswrapper[27819]: I0319 09:48:42.073178 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jpnfs"] Mar 19 09:48:42.257424 master-0 kubenswrapper[27819]: I0319 09:48:42.255555 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8jlp5" podUID="0a614f48-076d-402e-8eec-10df235bb1b8" containerName="ovn-controller" probeResult="failure" output=< Mar 19 09:48:42.257424 master-0 kubenswrapper[27819]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 19 09:48:42.257424 master-0 kubenswrapper[27819]: > Mar 19 09:48:42.616928 master-0 kubenswrapper[27819]: I0319 09:48:42.616842 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpnfs" event={"ID":"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87","Type":"ContainerStarted","Data":"b2a1c602eea9e1c73adb1622535ccd1fb9245c1ff1b333d9403ce55ea26e559b"} Mar 19 09:48:43.633221 master-0 kubenswrapper[27819]: I0319 09:48:43.633164 27819 generic.go:334] "Generic (PLEG): container finished" podID="533235c2-501c-4388-beef-c19b3f33f733" containerID="155401e1ef0126aa2dd49b154e4f4e970cf6169ce19684bdfdd5557211f4b695" exitCode=0 Mar 19 09:48:43.633221 master-0 kubenswrapper[27819]: I0319 09:48:43.633229 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmw8s" event={"ID":"533235c2-501c-4388-beef-c19b3f33f733","Type":"ContainerDied","Data":"155401e1ef0126aa2dd49b154e4f4e970cf6169ce19684bdfdd5557211f4b695"} Mar 19 09:48:45.045438 master-0 kubenswrapper[27819]: I0319 
09:48:45.045357 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.091163 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czq6x\" (UniqueName: \"kubernetes.io/projected/533235c2-501c-4388-beef-c19b3f33f733-kube-api-access-czq6x\") pod \"533235c2-501c-4388-beef-c19b3f33f733\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.091323 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-swiftconf\") pod \"533235c2-501c-4388-beef-c19b3f33f733\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.091478 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-ring-data-devices\") pod \"533235c2-501c-4388-beef-c19b3f33f733\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.091536 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/533235c2-501c-4388-beef-c19b3f33f733-etc-swift\") pod \"533235c2-501c-4388-beef-c19b3f33f733\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.091597 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-combined-ca-bundle\") pod \"533235c2-501c-4388-beef-c19b3f33f733\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " Mar 19 
09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.091726 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-scripts\") pod \"533235c2-501c-4388-beef-c19b3f33f733\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.091758 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-dispersionconf\") pod \"533235c2-501c-4388-beef-c19b3f33f733\" (UID: \"533235c2-501c-4388-beef-c19b3f33f733\") " Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.092125 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "533235c2-501c-4388-beef-c19b3f33f733" (UID: "533235c2-501c-4388-beef-c19b3f33f733"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.092765 27819 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.095078 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533235c2-501c-4388-beef-c19b3f33f733-kube-api-access-czq6x" (OuterVolumeSpecName: "kube-api-access-czq6x") pod "533235c2-501c-4388-beef-c19b3f33f733" (UID: "533235c2-501c-4388-beef-c19b3f33f733"). InnerVolumeSpecName "kube-api-access-czq6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:48:45.097777 master-0 kubenswrapper[27819]: I0319 09:48:45.096751 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533235c2-501c-4388-beef-c19b3f33f733-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "533235c2-501c-4388-beef-c19b3f33f733" (UID: "533235c2-501c-4388-beef-c19b3f33f733"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:48:45.100705 master-0 kubenswrapper[27819]: I0319 09:48:45.100646 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "533235c2-501c-4388-beef-c19b3f33f733" (UID: "533235c2-501c-4388-beef-c19b3f33f733"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:48:45.119073 master-0 kubenswrapper[27819]: I0319 09:48:45.118988 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-scripts" (OuterVolumeSpecName: "scripts") pod "533235c2-501c-4388-beef-c19b3f33f733" (UID: "533235c2-501c-4388-beef-c19b3f33f733"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:48:45.122546 master-0 kubenswrapper[27819]: I0319 09:48:45.122497 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "533235c2-501c-4388-beef-c19b3f33f733" (UID: "533235c2-501c-4388-beef-c19b3f33f733"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:48:45.127081 master-0 kubenswrapper[27819]: I0319 09:48:45.126997 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "533235c2-501c-4388-beef-c19b3f33f733" (UID: "533235c2-501c-4388-beef-c19b3f33f733"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:48:45.200216 master-0 kubenswrapper[27819]: I0319 09:48:45.200093 27819 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:45.200216 master-0 kubenswrapper[27819]: I0319 09:48:45.200143 27819 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/533235c2-501c-4388-beef-c19b3f33f733-etc-swift\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:45.200216 master-0 kubenswrapper[27819]: I0319 09:48:45.200155 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:45.200216 master-0 kubenswrapper[27819]: I0319 09:48:45.200166 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/533235c2-501c-4388-beef-c19b3f33f733-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:45.200216 master-0 kubenswrapper[27819]: I0319 09:48:45.200177 27819 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/533235c2-501c-4388-beef-c19b3f33f733-dispersionconf\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:45.200216 master-0 kubenswrapper[27819]: I0319 09:48:45.200187 27819 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czq6x\" (UniqueName: \"kubernetes.io/projected/533235c2-501c-4388-beef-c19b3f33f733-kube-api-access-czq6x\") on node \"master-0\" DevicePath \"\"" Mar 19 09:48:45.324985 master-0 kubenswrapper[27819]: I0319 09:48:45.324942 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tb9n2"] Mar 19 09:48:45.336557 master-0 kubenswrapper[27819]: I0319 09:48:45.336480 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tb9n2"] Mar 19 09:48:45.660810 master-0 kubenswrapper[27819]: I0319 09:48:45.660743 27819 generic.go:334] "Generic (PLEG): container finished" podID="3963c46c-0e6f-4a21-9719-469a187d3100" containerID="c9daed927127e007d399472d2e6693b92d54bd1b477358d96e2cceeedb3cf1d4" exitCode=0 Mar 19 09:48:45.660810 master-0 kubenswrapper[27819]: I0319 09:48:45.660782 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3963c46c-0e6f-4a21-9719-469a187d3100","Type":"ContainerDied","Data":"c9daed927127e007d399472d2e6693b92d54bd1b477358d96e2cceeedb3cf1d4"} Mar 19 09:48:45.663329 master-0 kubenswrapper[27819]: I0319 09:48:45.663275 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmw8s" event={"ID":"533235c2-501c-4388-beef-c19b3f33f733","Type":"ContainerDied","Data":"f238fb1aabcf73d0bb7809ad8d7ba4b97b476cccc8bb47defb9ffc7e998c0551"} Mar 19 09:48:45.663398 master-0 kubenswrapper[27819]: I0319 09:48:45.663330 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f238fb1aabcf73d0bb7809ad8d7ba4b97b476cccc8bb47defb9ffc7e998c0551" Mar 19 09:48:45.663398 master-0 kubenswrapper[27819]: I0319 09:48:45.663396 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kmw8s" Mar 19 09:48:45.671240 master-0 kubenswrapper[27819]: I0319 09:48:45.671109 27819 generic.go:334] "Generic (PLEG): container finished" podID="ef67f907-aead-43e3-aa5f-3a4f7887cf9c" containerID="b8199541c61b63cfcca19db0d98e51dcad98179d82985a5ed651b4f8160583c3" exitCode=0 Mar 19 09:48:45.671458 master-0 kubenswrapper[27819]: I0319 09:48:45.671165 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef67f907-aead-43e3-aa5f-3a4f7887cf9c","Type":"ContainerDied","Data":"b8199541c61b63cfcca19db0d98e51dcad98179d82985a5ed651b4f8160583c3"} Mar 19 09:48:45.924717 master-0 kubenswrapper[27819]: I0319 09:48:45.924577 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:45.929476 master-0 kubenswrapper[27819]: I0319 09:48:45.929440 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/53eef9d1-14df-45aa-ae9b-bc7583066d10-etc-swift\") pod \"swift-storage-0\" (UID: \"53eef9d1-14df-45aa-ae9b-bc7583066d10\") " pod="openstack/swift-storage-0" Mar 19 09:48:46.119833 master-0 kubenswrapper[27819]: I0319 09:48:46.119769 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 19 09:48:46.596683 master-0 kubenswrapper[27819]: I0319 09:48:46.596621 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 19 09:48:46.602506 master-0 kubenswrapper[27819]: W0319 09:48:46.602424 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53eef9d1_14df_45aa_ae9b_bc7583066d10.slice/crio-e1c870ac67ef03226814d76670385c8c7d48b260c4ae0e331bf10aa96494eb0a WatchSource:0}: Error finding container e1c870ac67ef03226814d76670385c8c7d48b260c4ae0e331bf10aa96494eb0a: Status 404 returned error can't find the container with id e1c870ac67ef03226814d76670385c8c7d48b260c4ae0e331bf10aa96494eb0a Mar 19 09:48:46.683475 master-0 kubenswrapper[27819]: I0319 09:48:46.683419 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3963c46c-0e6f-4a21-9719-469a187d3100","Type":"ContainerStarted","Data":"56388e67c5fbcdfe34b81fe5a214b61d69b55c733996eb5ecce57ca09ea01d2b"} Mar 19 09:48:46.683730 master-0 kubenswrapper[27819]: I0319 09:48:46.683697 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 19 09:48:46.685965 master-0 kubenswrapper[27819]: I0319 09:48:46.685921 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"e1c870ac67ef03226814d76670385c8c7d48b260c4ae0e331bf10aa96494eb0a"} Mar 19 09:48:46.688127 master-0 kubenswrapper[27819]: I0319 09:48:46.688077 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ef67f907-aead-43e3-aa5f-3a4f7887cf9c","Type":"ContainerStarted","Data":"9c1769053f619a4057992c9a07f87586c6fa39e38925d643ecdfaddb139f968c"} Mar 19 09:48:46.688301 master-0 kubenswrapper[27819]: I0319 09:48:46.688274 27819 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:48:46.712310 master-0 kubenswrapper[27819]: I0319 09:48:46.712230 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=59.492136825 podStartE2EDuration="1m5.712209692s" podCreationTimestamp="2026-03-19 09:47:41 +0000 UTC" firstStartedPulling="2026-03-19 09:47:59.047755098 +0000 UTC m=+863.969332790" lastFinishedPulling="2026-03-19 09:48:05.267827965 +0000 UTC m=+870.189405657" observedRunningTime="2026-03-19 09:48:46.710282629 +0000 UTC m=+911.631860321" watchObservedRunningTime="2026-03-19 09:48:46.712209692 +0000 UTC m=+911.633787394" Mar 19 09:48:46.746888 master-0 kubenswrapper[27819]: I0319 09:48:46.746495 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=62.023578641 podStartE2EDuration="1m6.746473407s" podCreationTimestamp="2026-03-19 09:47:40 +0000 UTC" firstStartedPulling="2026-03-19 09:48:00.545587027 +0000 UTC m=+865.467164719" lastFinishedPulling="2026-03-19 09:48:05.268481793 +0000 UTC m=+870.190059485" observedRunningTime="2026-03-19 09:48:46.743165607 +0000 UTC m=+911.664743329" watchObservedRunningTime="2026-03-19 09:48:46.746473407 +0000 UTC m=+911.668051099" Mar 19 09:48:47.266441 master-0 kubenswrapper[27819]: I0319 09:48:47.266375 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8jlp5" podUID="0a614f48-076d-402e-8eec-10df235bb1b8" containerName="ovn-controller" probeResult="failure" output=< Mar 19 09:48:47.266441 master-0 kubenswrapper[27819]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 19 09:48:47.266441 master-0 kubenswrapper[27819]: > Mar 19 09:48:47.295007 master-0 kubenswrapper[27819]: I0319 09:48:47.294927 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5a60df7e-6a63-4559-94ac-e793561673f1" path="/var/lib/kubelet/pods/5a60df7e-6a63-4559-94ac-e793561673f1/volumes" Mar 19 09:48:47.353718 master-0 kubenswrapper[27819]: I0319 09:48:47.353653 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:48:47.357392 master-0 kubenswrapper[27819]: I0319 09:48:47.357345 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-kgb6x" Mar 19 09:48:47.616357 master-0 kubenswrapper[27819]: I0319 09:48:47.616301 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8jlp5-config-h2br8"] Mar 19 09:48:47.617897 master-0 kubenswrapper[27819]: E0319 09:48:47.617849 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533235c2-501c-4388-beef-c19b3f33f733" containerName="swift-ring-rebalance" Mar 19 09:48:47.617897 master-0 kubenswrapper[27819]: I0319 09:48:47.617890 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="533235c2-501c-4388-beef-c19b3f33f733" containerName="swift-ring-rebalance" Mar 19 09:48:47.618669 master-0 kubenswrapper[27819]: I0319 09:48:47.618608 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="533235c2-501c-4388-beef-c19b3f33f733" containerName="swift-ring-rebalance" Mar 19 09:48:47.621437 master-0 kubenswrapper[27819]: I0319 09:48:47.621407 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.624887 master-0 kubenswrapper[27819]: I0319 09:48:47.624856 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 19 09:48:47.665344 master-0 kubenswrapper[27819]: I0319 09:48:47.665221 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jlp5-config-h2br8"] Mar 19 09:48:47.667874 master-0 kubenswrapper[27819]: I0319 09:48:47.667816 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run-ovn\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.667978 master-0 kubenswrapper[27819]: I0319 09:48:47.667906 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-log-ovn\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.668026 master-0 kubenswrapper[27819]: I0319 09:48:47.668008 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-additional-scripts\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.668110 master-0 kubenswrapper[27819]: I0319 09:48:47.668043 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.668110 master-0 kubenswrapper[27819]: I0319 09:48:47.668068 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcx85\" (UniqueName: \"kubernetes.io/projected/06a63dbb-f658-4205-8989-669c50880c14-kube-api-access-zcx85\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.668110 master-0 kubenswrapper[27819]: I0319 09:48:47.668098 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-scripts\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.769750 master-0 kubenswrapper[27819]: I0319 09:48:47.769705 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-additional-scripts\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.770037 master-0 kubenswrapper[27819]: I0319 09:48:47.770021 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.770168 master-0 kubenswrapper[27819]: I0319 09:48:47.770152 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcx85\" (UniqueName: \"kubernetes.io/projected/06a63dbb-f658-4205-8989-669c50880c14-kube-api-access-zcx85\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.771052 master-0 kubenswrapper[27819]: I0319 09:48:47.771036 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-scripts\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.771698 master-0 kubenswrapper[27819]: I0319 09:48:47.770426 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-additional-scripts\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.771790 master-0 kubenswrapper[27819]: I0319 09:48:47.770366 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.771946 master-0 kubenswrapper[27819]: I0319 09:48:47.771929 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run-ovn\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.772099 master-0 
kubenswrapper[27819]: I0319 09:48:47.772085 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-log-ovn\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.772537 master-0 kubenswrapper[27819]: I0319 09:48:47.772520 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-log-ovn\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.773014 master-0 kubenswrapper[27819]: I0319 09:48:47.772998 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run-ovn\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:47.774155 master-0 kubenswrapper[27819]: I0319 09:48:47.774115 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-scripts\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:48.415331 master-0 kubenswrapper[27819]: I0319 09:48:48.414159 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcx85\" (UniqueName: \"kubernetes.io/projected/06a63dbb-f658-4205-8989-669c50880c14-kube-api-access-zcx85\") pod \"ovn-controller-8jlp5-config-h2br8\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") " pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 
09:48:48.556099 master-0 kubenswrapper[27819]: I0319 09:48:48.556030 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jlp5-config-h2br8" Mar 19 09:48:50.373698 master-0 kubenswrapper[27819]: I0319 09:48:50.372955 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-j2pq2"] Mar 19 09:48:50.374626 master-0 kubenswrapper[27819]: I0319 09:48:50.374593 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:50.381220 master-0 kubenswrapper[27819]: I0319 09:48:50.381187 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 19 09:48:50.419908 master-0 kubenswrapper[27819]: I0319 09:48:50.419345 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j2pq2"] Mar 19 09:48:50.536715 master-0 kubenswrapper[27819]: I0319 09:48:50.536640 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrd4\" (UniqueName: \"kubernetes.io/projected/269819a3-da02-4fe9-b75d-558dad4d418a-kube-api-access-rbrd4\") pod \"root-account-create-update-j2pq2\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") " pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:50.536952 master-0 kubenswrapper[27819]: I0319 09:48:50.536751 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269819a3-da02-4fe9-b75d-558dad4d418a-operator-scripts\") pod \"root-account-create-update-j2pq2\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") " pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:50.642087 master-0 kubenswrapper[27819]: I0319 09:48:50.638245 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rbrd4\" (UniqueName: \"kubernetes.io/projected/269819a3-da02-4fe9-b75d-558dad4d418a-kube-api-access-rbrd4\") pod \"root-account-create-update-j2pq2\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") " pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:50.642087 master-0 kubenswrapper[27819]: I0319 09:48:50.638358 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269819a3-da02-4fe9-b75d-558dad4d418a-operator-scripts\") pod \"root-account-create-update-j2pq2\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") " pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:50.642087 master-0 kubenswrapper[27819]: I0319 09:48:50.639211 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269819a3-da02-4fe9-b75d-558dad4d418a-operator-scripts\") pod \"root-account-create-update-j2pq2\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") " pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:50.655260 master-0 kubenswrapper[27819]: I0319 09:48:50.655217 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrd4\" (UniqueName: \"kubernetes.io/projected/269819a3-da02-4fe9-b75d-558dad4d418a-kube-api-access-rbrd4\") pod \"root-account-create-update-j2pq2\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") " pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:50.723130 master-0 kubenswrapper[27819]: I0319 09:48:50.723051 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-j2pq2" Mar 19 09:48:52.258462 master-0 kubenswrapper[27819]: I0319 09:48:52.258372 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8jlp5" podUID="0a614f48-076d-402e-8eec-10df235bb1b8" containerName="ovn-controller" probeResult="failure" output=< Mar 19 09:48:52.258462 master-0 kubenswrapper[27819]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 19 09:48:52.258462 master-0 kubenswrapper[27819]: > Mar 19 09:48:56.651610 master-0 kubenswrapper[27819]: I0319 09:48:56.650644 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8jlp5-config-h2br8"] Mar 19 09:48:56.797093 master-0 kubenswrapper[27819]: I0319 09:48:56.797021 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-j2pq2"] Mar 19 09:48:56.830477 master-0 kubenswrapper[27819]: I0319 09:48:56.830425 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jlp5-config-h2br8" event={"ID":"06a63dbb-f658-4205-8989-669c50880c14","Type":"ContainerStarted","Data":"81f9ed2e6d55f1a5f7cb890478a375511ec5cb1c5103becd22aeb48e11068ce0"} Mar 19 09:48:56.833617 master-0 kubenswrapper[27819]: I0319 09:48:56.833568 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"819c546c22dfb6693f7a0871d6ca357a235316b47dfef94f808d33fd96e0b5a4"} Mar 19 09:48:56.833744 master-0 kubenswrapper[27819]: I0319 09:48:56.833625 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"4a811d5bff0c3aca619f4ac466b21350c189c75bc384a11203932ea3aecfbd36"} Mar 19 09:48:56.834672 master-0 kubenswrapper[27819]: I0319 09:48:56.834619 27819 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/root-account-create-update-j2pq2" event={"ID":"269819a3-da02-4fe9-b75d-558dad4d418a","Type":"ContainerStarted","Data":"49dc7187e67f287796e998df489c64fbde9f3d16098ad411b7733173bf9085a5"} Mar 19 09:48:57.051315 master-0 kubenswrapper[27819]: I0319 09:48:57.051262 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 19 09:48:57.260630 master-0 kubenswrapper[27819]: I0319 09:48:57.257942 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8jlp5" Mar 19 09:48:57.848492 master-0 kubenswrapper[27819]: I0319 09:48:57.848438 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j2pq2" event={"ID":"269819a3-da02-4fe9-b75d-558dad4d418a","Type":"ContainerDied","Data":"f19dd365c3b67193f902e88c09f03cd802761549990916afe866396744a9a930"} Mar 19 09:48:57.848492 master-0 kubenswrapper[27819]: I0319 09:48:57.848367 27819 generic.go:334] "Generic (PLEG): container finished" podID="269819a3-da02-4fe9-b75d-558dad4d418a" containerID="f19dd365c3b67193f902e88c09f03cd802761549990916afe866396744a9a930" exitCode=0 Mar 19 09:48:57.852891 master-0 kubenswrapper[27819]: I0319 09:48:57.852685 27819 generic.go:334] "Generic (PLEG): container finished" podID="06a63dbb-f658-4205-8989-669c50880c14" containerID="38e1c327f4f7ba04728e101a64a7122348c211af28a3706940a8e1d55b57a10a" exitCode=0 Mar 19 09:48:57.852891 master-0 kubenswrapper[27819]: I0319 09:48:57.852781 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jlp5-config-h2br8" event={"ID":"06a63dbb-f658-4205-8989-669c50880c14","Type":"ContainerDied","Data":"38e1c327f4f7ba04728e101a64a7122348c211af28a3706940a8e1d55b57a10a"} Mar 19 09:48:57.860157 master-0 kubenswrapper[27819]: I0319 09:48:57.860037 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"91dbcbd09831850030e356309d2f38cf07e19d5d3b0cd1aabc205d4440dbc8de"}
Mar 19 09:48:57.860157 master-0 kubenswrapper[27819]: I0319 09:48:57.860115 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"d06ce8382fa818d82310f05b418557673c106869f9536e70f7d7eaae44f10942"}
Mar 19 09:48:57.862522 master-0 kubenswrapper[27819]: I0319 09:48:57.862056 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpnfs" event={"ID":"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87","Type":"ContainerStarted","Data":"7bae05fb3383244813acfdfc734e22bbe682bdf7ca0d5badfc1f2e4831980e18"}
Mar 19 09:48:58.117310 master-0 kubenswrapper[27819]: I0319 09:48:58.115384 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jpnfs" podStartSLOduration=2.9380119799999997 podStartE2EDuration="17.115355394s" podCreationTimestamp="2026-03-19 09:48:41 +0000 UTC" firstStartedPulling="2026-03-19 09:48:42.087768553 +0000 UTC m=+907.009346245" lastFinishedPulling="2026-03-19 09:48:56.265111967 +0000 UTC m=+921.186689659" observedRunningTime="2026-03-19 09:48:58.108094395 +0000 UTC m=+923.029672097" watchObservedRunningTime="2026-03-19 09:48:58.115355394 +0000 UTC m=+923.036933086"
Mar 19 09:48:58.513010 master-0 kubenswrapper[27819]: I0319 09:48:58.510479 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 19 09:48:59.344902 master-0 kubenswrapper[27819]: I0319 09:48:59.344864 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j2pq2"
Mar 19 09:48:59.438385 master-0 kubenswrapper[27819]: I0319 09:48:59.437746 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jlp5-config-h2br8"
Mar 19 09:48:59.529426 master-0 kubenswrapper[27819]: I0319 09:48:59.529277 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269819a3-da02-4fe9-b75d-558dad4d418a-operator-scripts\") pod \"269819a3-da02-4fe9-b75d-558dad4d418a\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") "
Mar 19 09:48:59.529426 master-0 kubenswrapper[27819]: I0319 09:48:59.529363 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run-ovn\") pod \"06a63dbb-f658-4205-8989-669c50880c14\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") "
Mar 19 09:48:59.529773 master-0 kubenswrapper[27819]: I0319 09:48:59.529531 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbrd4\" (UniqueName: \"kubernetes.io/projected/269819a3-da02-4fe9-b75d-558dad4d418a-kube-api-access-rbrd4\") pod \"269819a3-da02-4fe9-b75d-558dad4d418a\" (UID: \"269819a3-da02-4fe9-b75d-558dad4d418a\") "
Mar 19 09:48:59.529773 master-0 kubenswrapper[27819]: I0319 09:48:59.529612 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run\") pod \"06a63dbb-f658-4205-8989-669c50880c14\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") "
Mar 19 09:48:59.529969 master-0 kubenswrapper[27819]: I0319 09:48:59.529915 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269819a3-da02-4fe9-b75d-558dad4d418a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "269819a3-da02-4fe9-b75d-558dad4d418a" (UID: "269819a3-da02-4fe9-b75d-558dad4d418a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:48:59.530043 master-0 kubenswrapper[27819]: I0319 09:48:59.530029 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run" (OuterVolumeSpecName: "var-run") pod "06a63dbb-f658-4205-8989-669c50880c14" (UID: "06a63dbb-f658-4205-8989-669c50880c14"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:48:59.530106 master-0 kubenswrapper[27819]: I0319 09:48:59.530074 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "06a63dbb-f658-4205-8989-669c50880c14" (UID: "06a63dbb-f658-4205-8989-669c50880c14"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:48:59.530239 master-0 kubenswrapper[27819]: I0319 09:48:59.530202 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269819a3-da02-4fe9-b75d-558dad4d418a-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.530239 master-0 kubenswrapper[27819]: I0319 09:48:59.530234 27819 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run-ovn\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.530359 master-0 kubenswrapper[27819]: I0319 09:48:59.530251 27819 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-run\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.535461 master-0 kubenswrapper[27819]: I0319 09:48:59.533722 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269819a3-da02-4fe9-b75d-558dad4d418a-kube-api-access-rbrd4" (OuterVolumeSpecName: "kube-api-access-rbrd4") pod "269819a3-da02-4fe9-b75d-558dad4d418a" (UID: "269819a3-da02-4fe9-b75d-558dad4d418a"). InnerVolumeSpecName "kube-api-access-rbrd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:48:59.631152 master-0 kubenswrapper[27819]: I0319 09:48:59.630956 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcx85\" (UniqueName: \"kubernetes.io/projected/06a63dbb-f658-4205-8989-669c50880c14-kube-api-access-zcx85\") pod \"06a63dbb-f658-4205-8989-669c50880c14\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") "
Mar 19 09:48:59.631152 master-0 kubenswrapper[27819]: I0319 09:48:59.631028 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-scripts\") pod \"06a63dbb-f658-4205-8989-669c50880c14\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") "
Mar 19 09:48:59.631152 master-0 kubenswrapper[27819]: I0319 09:48:59.631079 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-log-ovn\") pod \"06a63dbb-f658-4205-8989-669c50880c14\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") "
Mar 19 09:48:59.631152 master-0 kubenswrapper[27819]: I0319 09:48:59.631123 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-additional-scripts\") pod \"06a63dbb-f658-4205-8989-669c50880c14\" (UID: \"06a63dbb-f658-4205-8989-669c50880c14\") "
Mar 19 09:48:59.637025 master-0 kubenswrapper[27819]: I0319 09:48:59.631863 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbrd4\" (UniqueName: \"kubernetes.io/projected/269819a3-da02-4fe9-b75d-558dad4d418a-kube-api-access-rbrd4\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.637025 master-0 kubenswrapper[27819]: I0319 09:48:59.632821 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-scripts" (OuterVolumeSpecName: "scripts") pod "06a63dbb-f658-4205-8989-669c50880c14" (UID: "06a63dbb-f658-4205-8989-669c50880c14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:48:59.637025 master-0 kubenswrapper[27819]: I0319 09:48:59.632972 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "06a63dbb-f658-4205-8989-669c50880c14" (UID: "06a63dbb-f658-4205-8989-669c50880c14"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:48:59.637025 master-0 kubenswrapper[27819]: I0319 09:48:59.635851 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "06a63dbb-f658-4205-8989-669c50880c14" (UID: "06a63dbb-f658-4205-8989-669c50880c14"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:48:59.639529 master-0 kubenswrapper[27819]: I0319 09:48:59.639481 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a63dbb-f658-4205-8989-669c50880c14-kube-api-access-zcx85" (OuterVolumeSpecName: "kube-api-access-zcx85") pod "06a63dbb-f658-4205-8989-669c50880c14" (UID: "06a63dbb-f658-4205-8989-669c50880c14"). InnerVolumeSpecName "kube-api-access-zcx85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:48:59.735245 master-0 kubenswrapper[27819]: I0319 09:48:59.735134 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcx85\" (UniqueName: \"kubernetes.io/projected/06a63dbb-f658-4205-8989-669c50880c14-kube-api-access-zcx85\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.735245 master-0 kubenswrapper[27819]: I0319 09:48:59.735177 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.735245 master-0 kubenswrapper[27819]: I0319 09:48:59.735188 27819 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/06a63dbb-f658-4205-8989-669c50880c14-var-log-ovn\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.735245 master-0 kubenswrapper[27819]: I0319 09:48:59.735200 27819 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/06a63dbb-f658-4205-8989-669c50880c14-additional-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:48:59.897064 master-0 kubenswrapper[27819]: I0319 09:48:59.897014 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qmknf"]
Mar 19 09:48:59.897483 master-0 kubenswrapper[27819]: E0319 09:48:59.897462 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269819a3-da02-4fe9-b75d-558dad4d418a" containerName="mariadb-account-create-update"
Mar 19 09:48:59.897483 master-0 kubenswrapper[27819]: I0319 09:48:59.897480 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="269819a3-da02-4fe9-b75d-558dad4d418a" containerName="mariadb-account-create-update"
Mar 19 09:48:59.897589 master-0 kubenswrapper[27819]: E0319 09:48:59.897508 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a63dbb-f658-4205-8989-669c50880c14" containerName="ovn-config"
Mar 19 09:48:59.897589 master-0 kubenswrapper[27819]: I0319 09:48:59.897515 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a63dbb-f658-4205-8989-669c50880c14" containerName="ovn-config"
Mar 19 09:48:59.897853 master-0 kubenswrapper[27819]: I0319 09:48:59.897818 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="269819a3-da02-4fe9-b75d-558dad4d418a" containerName="mariadb-account-create-update"
Mar 19 09:48:59.897890 master-0 kubenswrapper[27819]: I0319 09:48:59.897876 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a63dbb-f658-4205-8989-669c50880c14" containerName="ovn-config"
Mar 19 09:48:59.898467 master-0 kubenswrapper[27819]: I0319 09:48:59.898441 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qmknf"
Mar 19 09:48:59.929924 master-0 kubenswrapper[27819]: I0319 09:48:59.929843 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-j2pq2" event={"ID":"269819a3-da02-4fe9-b75d-558dad4d418a","Type":"ContainerDied","Data":"49dc7187e67f287796e998df489c64fbde9f3d16098ad411b7733173bf9085a5"}
Mar 19 09:48:59.929924 master-0 kubenswrapper[27819]: I0319 09:48:59.929910 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49dc7187e67f287796e998df489c64fbde9f3d16098ad411b7733173bf9085a5"
Mar 19 09:48:59.930251 master-0 kubenswrapper[27819]: I0319 09:48:59.930023 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-j2pq2"
Mar 19 09:48:59.957764 master-0 kubenswrapper[27819]: I0319 09:48:59.957674 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ce1b-account-create-update-77p7q"]
Mar 19 09:48:59.960376 master-0 kubenswrapper[27819]: I0319 09:48:59.959645 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:48:59.968067 master-0 kubenswrapper[27819]: I0319 09:48:59.967904 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 19 09:48:59.968808 master-0 kubenswrapper[27819]: I0319 09:48:59.968513 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8jlp5-config-h2br8" event={"ID":"06a63dbb-f658-4205-8989-669c50880c14","Type":"ContainerDied","Data":"81f9ed2e6d55f1a5f7cb890478a375511ec5cb1c5103becd22aeb48e11068ce0"}
Mar 19 09:48:59.968808 master-0 kubenswrapper[27819]: I0319 09:48:59.968582 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81f9ed2e6d55f1a5f7cb890478a375511ec5cb1c5103becd22aeb48e11068ce0"
Mar 19 09:48:59.968808 master-0 kubenswrapper[27819]: I0319 09:48:59.968751 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8jlp5-config-h2br8"
Mar 19 09:49:00.000585 master-0 kubenswrapper[27819]: I0319 09:48:59.997873 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qmknf"]
Mar 19 09:49:00.036873 master-0 kubenswrapper[27819]: I0319 09:49:00.036187 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"f168c2c6ea7713213c4a5036b47adec0eb990d978dbf3a3b1ea86c9c3dd75a17"}
Mar 19 09:49:00.041288 master-0 kubenswrapper[27819]: I0319 09:49:00.040979 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5258q\" (UniqueName: \"kubernetes.io/projected/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-kube-api-access-5258q\") pod \"cinder-db-create-qmknf\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " pod="openstack/cinder-db-create-qmknf"
Mar 19 09:49:00.041288 master-0 kubenswrapper[27819]: I0319 09:49:00.041087 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-operator-scripts\") pod \"cinder-db-create-qmknf\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " pod="openstack/cinder-db-create-qmknf"
Mar 19 09:49:00.052573 master-0 kubenswrapper[27819]: I0319 09:49:00.050801 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ce1b-account-create-update-77p7q"]
Mar 19 09:49:00.091092 master-0 kubenswrapper[27819]: I0319 09:49:00.091027 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9t78z"]
Mar 19 09:49:00.092576 master-0 kubenswrapper[27819]: I0319 09:49:00.092518 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.101822 master-0 kubenswrapper[27819]: I0319 09:49:00.101518 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9t78z"]
Mar 19 09:49:00.149699 master-0 kubenswrapper[27819]: I0319 09:49:00.147692 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5258q\" (UniqueName: \"kubernetes.io/projected/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-kube-api-access-5258q\") pod \"cinder-db-create-qmknf\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " pod="openstack/cinder-db-create-qmknf"
Mar 19 09:49:00.149699 master-0 kubenswrapper[27819]: I0319 09:49:00.149113 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmr8\" (UniqueName: \"kubernetes.io/projected/3deae5fc-655a-4c5f-be1b-486fddcfc606-kube-api-access-jjmr8\") pod \"cinder-ce1b-account-create-update-77p7q\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:49:00.149699 master-0 kubenswrapper[27819]: I0319 09:49:00.149183 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3deae5fc-655a-4c5f-be1b-486fddcfc606-operator-scripts\") pod \"cinder-ce1b-account-create-update-77p7q\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:49:00.149699 master-0 kubenswrapper[27819]: I0319 09:49:00.149261 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-operator-scripts\") pod \"cinder-db-create-qmknf\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " pod="openstack/cinder-db-create-qmknf"
Mar 19 09:49:00.151640 master-0 kubenswrapper[27819]: I0319 09:49:00.150103 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-operator-scripts\") pod \"cinder-db-create-qmknf\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " pod="openstack/cinder-db-create-qmknf"
Mar 19 09:49:00.185943 master-0 kubenswrapper[27819]: I0319 09:49:00.185900 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5258q\" (UniqueName: \"kubernetes.io/projected/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-kube-api-access-5258q\") pod \"cinder-db-create-qmknf\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " pod="openstack/cinder-db-create-qmknf"
Mar 19 09:49:00.218878 master-0 kubenswrapper[27819]: I0319 09:49:00.216993 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9abb-account-create-update-kjptp"]
Mar 19 09:49:00.220061 master-0 kubenswrapper[27819]: I0319 09:49:00.219998 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.227983 master-0 kubenswrapper[27819]: I0319 09:49:00.227644 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 19 09:49:00.233968 master-0 kubenswrapper[27819]: I0319 09:49:00.233924 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9abb-account-create-update-kjptp"]
Mar 19 09:49:00.251807 master-0 kubenswrapper[27819]: I0319 09:49:00.251467 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmqhs\" (UniqueName: \"kubernetes.io/projected/dea171b1-195d-427a-bbdc-80ac54af14bd-kube-api-access-mmqhs\") pod \"neutron-db-create-9t78z\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.251807 master-0 kubenswrapper[27819]: I0319 09:49:00.251576 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjmr8\" (UniqueName: \"kubernetes.io/projected/3deae5fc-655a-4c5f-be1b-486fddcfc606-kube-api-access-jjmr8\") pod \"cinder-ce1b-account-create-update-77p7q\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:49:00.251807 master-0 kubenswrapper[27819]: I0319 09:49:00.251623 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea171b1-195d-427a-bbdc-80ac54af14bd-operator-scripts\") pod \"neutron-db-create-9t78z\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.252655 master-0 kubenswrapper[27819]: I0319 09:49:00.252602 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3deae5fc-655a-4c5f-be1b-486fddcfc606-operator-scripts\") pod \"cinder-ce1b-account-create-update-77p7q\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:49:00.253644 master-0 kubenswrapper[27819]: I0319 09:49:00.253611 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3deae5fc-655a-4c5f-be1b-486fddcfc606-operator-scripts\") pod \"cinder-ce1b-account-create-update-77p7q\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:49:00.291184 master-0 kubenswrapper[27819]: I0319 09:49:00.291133 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjmr8\" (UniqueName: \"kubernetes.io/projected/3deae5fc-655a-4c5f-be1b-486fddcfc606-kube-api-access-jjmr8\") pod \"cinder-ce1b-account-create-update-77p7q\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:49:00.305917 master-0 kubenswrapper[27819]: I0319 09:49:00.305778 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4c4xb"]
Mar 19 09:49:00.307464 master-0 kubenswrapper[27819]: I0319 09:49:00.307352 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.313936 master-0 kubenswrapper[27819]: I0319 09:49:00.313884 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 19 09:49:00.314112 master-0 kubenswrapper[27819]: I0319 09:49:00.314056 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 19 09:49:00.314190 master-0 kubenswrapper[27819]: I0319 09:49:00.314166 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 19 09:49:00.319680 master-0 kubenswrapper[27819]: I0319 09:49:00.319643 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qmknf"
Mar 19 09:49:00.330749 master-0 kubenswrapper[27819]: I0319 09:49:00.330694 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4c4xb"]
Mar 19 09:49:00.355133 master-0 kubenswrapper[27819]: I0319 09:49:00.355068 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-config-data\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.355824 master-0 kubenswrapper[27819]: I0319 09:49:00.355150 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-combined-ca-bundle\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.355824 master-0 kubenswrapper[27819]: I0319 09:49:00.355172 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89rwm\" (UniqueName: \"kubernetes.io/projected/affc6a41-5d03-4f11-9415-2d17fee716d6-kube-api-access-89rwm\") pod \"neutron-9abb-account-create-update-kjptp\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.355824 master-0 kubenswrapper[27819]: I0319 09:49:00.355199 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/affc6a41-5d03-4f11-9415-2d17fee716d6-operator-scripts\") pod \"neutron-9abb-account-create-update-kjptp\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.355824 master-0 kubenswrapper[27819]: I0319 09:49:00.355318 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcbpg\" (UniqueName: \"kubernetes.io/projected/206adece-9cbb-4c73-a7fa-b2ba36acbbee-kube-api-access-mcbpg\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.357252 master-0 kubenswrapper[27819]: I0319 09:49:00.357193 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmqhs\" (UniqueName: \"kubernetes.io/projected/dea171b1-195d-427a-bbdc-80ac54af14bd-kube-api-access-mmqhs\") pod \"neutron-db-create-9t78z\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.357343 master-0 kubenswrapper[27819]: I0319 09:49:00.357314 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea171b1-195d-427a-bbdc-80ac54af14bd-operator-scripts\") pod \"neutron-db-create-9t78z\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.357646 master-0 kubenswrapper[27819]: I0319 09:49:00.357614 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ce1b-account-create-update-77p7q"
Mar 19 09:49:00.360580 master-0 kubenswrapper[27819]: I0319 09:49:00.360363 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea171b1-195d-427a-bbdc-80ac54af14bd-operator-scripts\") pod \"neutron-db-create-9t78z\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.386882 master-0 kubenswrapper[27819]: I0319 09:49:00.386831 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmqhs\" (UniqueName: \"kubernetes.io/projected/dea171b1-195d-427a-bbdc-80ac54af14bd-kube-api-access-mmqhs\") pod \"neutron-db-create-9t78z\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.442080 master-0 kubenswrapper[27819]: I0319 09:49:00.442012 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9t78z"
Mar 19 09:49:00.459292 master-0 kubenswrapper[27819]: I0319 09:49:00.459222 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcbpg\" (UniqueName: \"kubernetes.io/projected/206adece-9cbb-4c73-a7fa-b2ba36acbbee-kube-api-access-mcbpg\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.459524 master-0 kubenswrapper[27819]: I0319 09:49:00.459339 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-config-data\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.459524 master-0 kubenswrapper[27819]: I0319 09:49:00.459371 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-combined-ca-bundle\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.459524 master-0 kubenswrapper[27819]: I0319 09:49:00.459391 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89rwm\" (UniqueName: \"kubernetes.io/projected/affc6a41-5d03-4f11-9415-2d17fee716d6-kube-api-access-89rwm\") pod \"neutron-9abb-account-create-update-kjptp\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.459524 master-0 kubenswrapper[27819]: I0319 09:49:00.459423 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/affc6a41-5d03-4f11-9415-2d17fee716d6-operator-scripts\") pod \"neutron-9abb-account-create-update-kjptp\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.460508 master-0 kubenswrapper[27819]: I0319 09:49:00.460472 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/affc6a41-5d03-4f11-9415-2d17fee716d6-operator-scripts\") pod \"neutron-9abb-account-create-update-kjptp\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.464638 master-0 kubenswrapper[27819]: I0319 09:49:00.464597 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-config-data\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.470621 master-0 kubenswrapper[27819]: I0319 09:49:00.470195 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-combined-ca-bundle\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.492620 master-0 kubenswrapper[27819]: I0319 09:49:00.489970 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcbpg\" (UniqueName: \"kubernetes.io/projected/206adece-9cbb-4c73-a7fa-b2ba36acbbee-kube-api-access-mcbpg\") pod \"keystone-db-sync-4c4xb\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") " pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:00.498323 master-0 kubenswrapper[27819]: I0319 09:49:00.498268 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89rwm\" (UniqueName: \"kubernetes.io/projected/affc6a41-5d03-4f11-9415-2d17fee716d6-kube-api-access-89rwm\") pod \"neutron-9abb-account-create-update-kjptp\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.536752 master-0 kubenswrapper[27819]: I0319 09:49:00.536698 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9abb-account-create-update-kjptp"
Mar 19 09:49:00.549828 master-0 kubenswrapper[27819]: I0319 09:49:00.548561 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:01.060747 master-0 kubenswrapper[27819]: I0319 09:49:01.060355 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"c4629b74e2875b685556c89b9344dfe7114ef44a6a05307d31588d52c8836d08"}
Mar 19 09:49:01.060747 master-0 kubenswrapper[27819]: I0319 09:49:01.060412 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"19d1763c004333c6bad9b3fe68268b77ea59ebc010100b8608abe1fd782f2120"}
Mar 19 09:49:01.675087 master-0 kubenswrapper[27819]: W0319 09:49:01.675038 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod881d4aba_28a0_45f1_b05f_c003b3b6b2ac.slice/crio-bdee1878e3469a71d28285d740f34283895bbced1b05f33dac1015701d2ffede WatchSource:0}: Error finding container bdee1878e3469a71d28285d740f34283895bbced1b05f33dac1015701d2ffede: Status 404 returned error can't find the container with id bdee1878e3469a71d28285d740f34283895bbced1b05f33dac1015701d2ffede
Mar 19 09:49:01.697396 master-0 kubenswrapper[27819]: I0319 09:49:01.692966 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qmknf"]
Mar 19 09:49:01.706682 master-0 kubenswrapper[27819]: I0319 09:49:01.706641 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ce1b-account-create-update-77p7q"]
Mar 19 09:49:01.730529 master-0 kubenswrapper[27819]: W0319 09:49:01.729361 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod206adece_9cbb_4c73_a7fa_b2ba36acbbee.slice/crio-c0a87fdd2d523c91bc503c72336eae0ad49aa7fa618257ad2ceee1c709bad0eb WatchSource:0}: Error finding container c0a87fdd2d523c91bc503c72336eae0ad49aa7fa618257ad2ceee1c709bad0eb: Status 404 returned error can't find the container with id c0a87fdd2d523c91bc503c72336eae0ad49aa7fa618257ad2ceee1c709bad0eb
Mar 19 09:49:01.737703 master-0 kubenswrapper[27819]: W0319 09:49:01.732361 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3deae5fc_655a_4c5f_be1b_486fddcfc606.slice/crio-6b75ff490e69293073139b6d9b523df1bddac27e6311b4b6de070764840c06f2 WatchSource:0}: Error finding container 6b75ff490e69293073139b6d9b523df1bddac27e6311b4b6de070764840c06f2: Status 404 returned error can't find the container with id 6b75ff490e69293073139b6d9b523df1bddac27e6311b4b6de070764840c06f2
Mar 19 09:49:01.737703 master-0 kubenswrapper[27819]: I0319 09:49:01.734106 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4c4xb"]
Mar 19 09:49:01.737703 master-0 kubenswrapper[27819]: W0319 09:49:01.734635 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaffc6a41_5d03_4f11_9415_2d17fee716d6.slice/crio-758d6c67bc2a9358dac115d6ed0572f938d853aaecc78709c957dda241fa6f59 WatchSource:0}: Error finding container 758d6c67bc2a9358dac115d6ed0572f938d853aaecc78709c957dda241fa6f59: Status 404 returned error can't find the container with id 758d6c67bc2a9358dac115d6ed0572f938d853aaecc78709c957dda241fa6f59
Mar 19 09:49:01.758616 master-0 kubenswrapper[27819]: I0319 09:49:01.758577 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9abb-account-create-update-kjptp"]
Mar 19 09:49:01.771768 master-0 kubenswrapper[27819]: I0319 09:49:01.770120 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9t78z"]
Mar 19 09:49:01.823065 master-0 kubenswrapper[27819]: I0319 09:49:01.819906 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8jlp5-config-h2br8"]
Mar 19 09:49:01.837658 master-0 kubenswrapper[27819]: I0319 09:49:01.837504 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8jlp5-config-h2br8"]
Mar 19 09:49:02.117574 master-0 kubenswrapper[27819]: I0319 09:49:02.116462 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9t78z" event={"ID":"dea171b1-195d-427a-bbdc-80ac54af14bd","Type":"ContainerStarted","Data":"84a8c770f02d21d31639ccb76335b934f2e03e4842684737d8426c825bd16700"}
Mar 19 09:49:02.117847 master-0 kubenswrapper[27819]: I0319 09:49:02.117810 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9abb-account-create-update-kjptp" event={"ID":"affc6a41-5d03-4f11-9415-2d17fee716d6","Type":"ContainerStarted","Data":"758d6c67bc2a9358dac115d6ed0572f938d853aaecc78709c957dda241fa6f59"}
Mar 19 09:49:02.119375 master-0 kubenswrapper[27819]: I0319 09:49:02.119306 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ce1b-account-create-update-77p7q" event={"ID":"3deae5fc-655a-4c5f-be1b-486fddcfc606","Type":"ContainerStarted","Data":"6b75ff490e69293073139b6d9b523df1bddac27e6311b4b6de070764840c06f2"}
Mar 19 09:49:02.126571 master-0 kubenswrapper[27819]: I0319 09:49:02.124950 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4c4xb" event={"ID":"206adece-9cbb-4c73-a7fa-b2ba36acbbee","Type":"ContainerStarted","Data":"c0a87fdd2d523c91bc503c72336eae0ad49aa7fa618257ad2ceee1c709bad0eb"}
Mar 19 09:49:02.135526 master-0 kubenswrapper[27819]: I0319 09:49:02.134347 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qmknf" event={"ID":"881d4aba-28a0-45f1-b05f-c003b3b6b2ac","Type":"ContainerStarted","Data":"bdee1878e3469a71d28285d740f34283895bbced1b05f33dac1015701d2ffede"}
Mar 19 09:49:02.153132 master-0 kubenswrapper[27819]: I0319 09:49:02.153073 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"648a2af2b627234bfcb77fab20a6147bbaa56d7f41a563c97575635e03660274"}
Mar 19 09:49:02.985183 master-0 kubenswrapper[27819]: E0319 09:49:02.985115 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaffc6a41_5d03_4f11_9415_2d17fee716d6.slice/crio-conmon-ab59f40d7873631d7e7fe0fbe85ee5f09c8af548517b6ec61865d1d210deda3c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaffc6a41_5d03_4f11_9415_2d17fee716d6.slice/crio-ab59f40d7873631d7e7fe0fbe85ee5f09c8af548517b6ec61865d1d210deda3c.scope\": RecentStats: unable to find data in memory cache]"
Mar 19 09:49:03.180691 master-0 kubenswrapper[27819]: I0319 09:49:03.180586 27819 generic.go:334] "Generic (PLEG): container finished" podID="affc6a41-5d03-4f11-9415-2d17fee716d6" containerID="ab59f40d7873631d7e7fe0fbe85ee5f09c8af548517b6ec61865d1d210deda3c" exitCode=0
Mar 19 09:49:03.180691 master-0 kubenswrapper[27819]: I0319 09:49:03.180662 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9abb-account-create-update-kjptp" event={"ID":"affc6a41-5d03-4f11-9415-2d17fee716d6","Type":"ContainerDied","Data":"ab59f40d7873631d7e7fe0fbe85ee5f09c8af548517b6ec61865d1d210deda3c"}
Mar 19 09:49:03.182878 master-0 kubenswrapper[27819]: I0319 09:49:03.182839 27819 generic.go:334] "Generic (PLEG): container finished" podID="3deae5fc-655a-4c5f-be1b-486fddcfc606" containerID="d72df5c1a63d9aca4b027a171f2c23a686016bafa78561cbfa5b33001f62f3f3" exitCode=0
Mar 19 09:49:03.183002 master-0 kubenswrapper[27819]: I0319 09:49:03.182882 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ce1b-account-create-update-77p7q" event={"ID":"3deae5fc-655a-4c5f-be1b-486fddcfc606","Type":"ContainerDied","Data":"d72df5c1a63d9aca4b027a171f2c23a686016bafa78561cbfa5b33001f62f3f3"}
Mar 19 09:49:03.184834 master-0 kubenswrapper[27819]: I0319 09:49:03.184731 27819 generic.go:334] "Generic (PLEG): container finished" podID="881d4aba-28a0-45f1-b05f-c003b3b6b2ac" containerID="6be53d27e56de3d8167bd2e882ee9c8fb9e090b57b58a1916585c9f92803a529" exitCode=0
Mar 19 09:49:03.184959 master-0 kubenswrapper[27819]: I0319 09:49:03.184863 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qmknf" event={"ID":"881d4aba-28a0-45f1-b05f-c003b3b6b2ac","Type":"ContainerDied","Data":"6be53d27e56de3d8167bd2e882ee9c8fb9e090b57b58a1916585c9f92803a529"}
Mar 19 09:49:03.186423 master-0 kubenswrapper[27819]: I0319 09:49:03.186392 27819 generic.go:334] "Generic (PLEG): container finished" podID="dea171b1-195d-427a-bbdc-80ac54af14bd" containerID="8daa0a2f6773947968d257246c818f6a753d54769d10d53dd3d9c27e5eef54a4" exitCode=0
Mar 19 09:49:03.186514 master-0 kubenswrapper[27819]: I0319 09:49:03.186429 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9t78z" event={"ID":"dea171b1-195d-427a-bbdc-80ac54af14bd","Type":"ContainerDied","Data":"8daa0a2f6773947968d257246c818f6a753d54769d10d53dd3d9c27e5eef54a4"}
Mar 19 09:49:03.214553 master-0 kubenswrapper[27819]: I0319
09:49:03.214366 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qmknf" podStartSLOduration=4.214344889 podStartE2EDuration="4.214344889s" podCreationTimestamp="2026-03-19 09:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:02.202684403 +0000 UTC m=+927.124262095" watchObservedRunningTime="2026-03-19 09:49:03.214344889 +0000 UTC m=+928.135922581" Mar 19 09:49:03.297263 master-0 kubenswrapper[27819]: I0319 09:49:03.297144 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a63dbb-f658-4205-8989-669c50880c14" path="/var/lib/kubelet/pods/06a63dbb-f658-4205-8989-669c50880c14/volumes" Mar 19 09:49:07.117500 master-0 kubenswrapper[27819]: I0319 09:49:07.117437 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9abb-account-create-update-kjptp" Mar 19 09:49:07.127415 master-0 kubenswrapper[27819]: I0319 09:49:07.127303 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89rwm\" (UniqueName: \"kubernetes.io/projected/affc6a41-5d03-4f11-9415-2d17fee716d6-kube-api-access-89rwm\") pod \"affc6a41-5d03-4f11-9415-2d17fee716d6\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " Mar 19 09:49:07.127728 master-0 kubenswrapper[27819]: I0319 09:49:07.127534 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/affc6a41-5d03-4f11-9415-2d17fee716d6-operator-scripts\") pod \"affc6a41-5d03-4f11-9415-2d17fee716d6\" (UID: \"affc6a41-5d03-4f11-9415-2d17fee716d6\") " Mar 19 09:49:07.128981 master-0 kubenswrapper[27819]: I0319 09:49:07.128832 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/affc6a41-5d03-4f11-9415-2d17fee716d6-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "affc6a41-5d03-4f11-9415-2d17fee716d6" (UID: "affc6a41-5d03-4f11-9415-2d17fee716d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:07.130732 master-0 kubenswrapper[27819]: I0319 09:49:07.130622 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ce1b-account-create-update-77p7q" Mar 19 09:49:07.131936 master-0 kubenswrapper[27819]: I0319 09:49:07.131910 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qmknf" Mar 19 09:49:07.132117 master-0 kubenswrapper[27819]: I0319 09:49:07.132092 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affc6a41-5d03-4f11-9415-2d17fee716d6-kube-api-access-89rwm" (OuterVolumeSpecName: "kube-api-access-89rwm") pod "affc6a41-5d03-4f11-9415-2d17fee716d6" (UID: "affc6a41-5d03-4f11-9415-2d17fee716d6"). InnerVolumeSpecName "kube-api-access-89rwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:07.176229 master-0 kubenswrapper[27819]: I0319 09:49:07.176189 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9t78z" Mar 19 09:49:07.230065 master-0 kubenswrapper[27819]: I0319 09:49:07.228932 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjmr8\" (UniqueName: \"kubernetes.io/projected/3deae5fc-655a-4c5f-be1b-486fddcfc606-kube-api-access-jjmr8\") pod \"3deae5fc-655a-4c5f-be1b-486fddcfc606\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " Mar 19 09:49:07.230065 master-0 kubenswrapper[27819]: I0319 09:49:07.229060 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-operator-scripts\") pod \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " Mar 19 09:49:07.230065 master-0 kubenswrapper[27819]: I0319 09:49:07.229108 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmqhs\" (UniqueName: \"kubernetes.io/projected/dea171b1-195d-427a-bbdc-80ac54af14bd-kube-api-access-mmqhs\") pod \"dea171b1-195d-427a-bbdc-80ac54af14bd\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " Mar 19 09:49:07.230065 master-0 kubenswrapper[27819]: I0319 09:49:07.229134 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea171b1-195d-427a-bbdc-80ac54af14bd-operator-scripts\") pod \"dea171b1-195d-427a-bbdc-80ac54af14bd\" (UID: \"dea171b1-195d-427a-bbdc-80ac54af14bd\") " Mar 19 09:49:07.230065 master-0 kubenswrapper[27819]: I0319 09:49:07.229278 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3deae5fc-655a-4c5f-be1b-486fddcfc606-operator-scripts\") pod \"3deae5fc-655a-4c5f-be1b-486fddcfc606\" (UID: \"3deae5fc-655a-4c5f-be1b-486fddcfc606\") " Mar 19 09:49:07.230065 master-0 kubenswrapper[27819]: 
I0319 09:49:07.229338 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5258q\" (UniqueName: \"kubernetes.io/projected/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-kube-api-access-5258q\") pod \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\" (UID: \"881d4aba-28a0-45f1-b05f-c003b3b6b2ac\") " Mar 19 09:49:07.230701 master-0 kubenswrapper[27819]: I0319 09:49:07.230604 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dea171b1-195d-427a-bbdc-80ac54af14bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dea171b1-195d-427a-bbdc-80ac54af14bd" (UID: "dea171b1-195d-427a-bbdc-80ac54af14bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:07.233683 master-0 kubenswrapper[27819]: I0319 09:49:07.231067 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3deae5fc-655a-4c5f-be1b-486fddcfc606-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3deae5fc-655a-4c5f-be1b-486fddcfc606" (UID: "3deae5fc-655a-4c5f-be1b-486fddcfc606"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:07.233683 master-0 kubenswrapper[27819]: I0319 09:49:07.231121 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "881d4aba-28a0-45f1-b05f-c003b3b6b2ac" (UID: "881d4aba-28a0-45f1-b05f-c003b3b6b2ac"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:07.233683 master-0 kubenswrapper[27819]: I0319 09:49:07.231731 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:07.233683 master-0 kubenswrapper[27819]: I0319 09:49:07.231748 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/affc6a41-5d03-4f11-9415-2d17fee716d6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:07.233683 master-0 kubenswrapper[27819]: I0319 09:49:07.231756 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dea171b1-195d-427a-bbdc-80ac54af14bd-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:07.233683 master-0 kubenswrapper[27819]: I0319 09:49:07.231765 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3deae5fc-655a-4c5f-be1b-486fddcfc606-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:07.233683 master-0 kubenswrapper[27819]: I0319 09:49:07.231775 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89rwm\" (UniqueName: \"kubernetes.io/projected/affc6a41-5d03-4f11-9415-2d17fee716d6-kube-api-access-89rwm\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:07.237425 master-0 kubenswrapper[27819]: I0319 09:49:07.237350 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea171b1-195d-427a-bbdc-80ac54af14bd-kube-api-access-mmqhs" (OuterVolumeSpecName: "kube-api-access-mmqhs") pod "dea171b1-195d-427a-bbdc-80ac54af14bd" (UID: "dea171b1-195d-427a-bbdc-80ac54af14bd"). InnerVolumeSpecName "kube-api-access-mmqhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:07.237425 master-0 kubenswrapper[27819]: I0319 09:49:07.237401 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3deae5fc-655a-4c5f-be1b-486fddcfc606-kube-api-access-jjmr8" (OuterVolumeSpecName: "kube-api-access-jjmr8") pod "3deae5fc-655a-4c5f-be1b-486fddcfc606" (UID: "3deae5fc-655a-4c5f-be1b-486fddcfc606"). InnerVolumeSpecName "kube-api-access-jjmr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:07.241587 master-0 kubenswrapper[27819]: I0319 09:49:07.239115 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-kube-api-access-5258q" (OuterVolumeSpecName: "kube-api-access-5258q") pod "881d4aba-28a0-45f1-b05f-c003b3b6b2ac" (UID: "881d4aba-28a0-45f1-b05f-c003b3b6b2ac"). InnerVolumeSpecName "kube-api-access-5258q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:07.247073 master-0 kubenswrapper[27819]: I0319 09:49:07.246431 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9abb-account-create-update-kjptp" Mar 19 09:49:07.247073 master-0 kubenswrapper[27819]: I0319 09:49:07.246775 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9abb-account-create-update-kjptp" event={"ID":"affc6a41-5d03-4f11-9415-2d17fee716d6","Type":"ContainerDied","Data":"758d6c67bc2a9358dac115d6ed0572f938d853aaecc78709c957dda241fa6f59"} Mar 19 09:49:07.247073 master-0 kubenswrapper[27819]: I0319 09:49:07.246805 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="758d6c67bc2a9358dac115d6ed0572f938d853aaecc78709c957dda241fa6f59" Mar 19 09:49:07.250648 master-0 kubenswrapper[27819]: I0319 09:49:07.248875 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ce1b-account-create-update-77p7q" event={"ID":"3deae5fc-655a-4c5f-be1b-486fddcfc606","Type":"ContainerDied","Data":"6b75ff490e69293073139b6d9b523df1bddac27e6311b4b6de070764840c06f2"} Mar 19 09:49:07.250648 master-0 kubenswrapper[27819]: I0319 09:49:07.248904 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b75ff490e69293073139b6d9b523df1bddac27e6311b4b6de070764840c06f2" Mar 19 09:49:07.250648 master-0 kubenswrapper[27819]: I0319 09:49:07.248951 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ce1b-account-create-update-77p7q" Mar 19 09:49:07.271426 master-0 kubenswrapper[27819]: I0319 09:49:07.271380 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qmknf" event={"ID":"881d4aba-28a0-45f1-b05f-c003b3b6b2ac","Type":"ContainerDied","Data":"bdee1878e3469a71d28285d740f34283895bbced1b05f33dac1015701d2ffede"} Mar 19 09:49:07.271426 master-0 kubenswrapper[27819]: I0319 09:49:07.271422 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdee1878e3469a71d28285d740f34283895bbced1b05f33dac1015701d2ffede" Mar 19 09:49:07.271665 master-0 kubenswrapper[27819]: I0319 09:49:07.271474 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qmknf" Mar 19 09:49:07.274031 master-0 kubenswrapper[27819]: I0319 09:49:07.273652 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9t78z" event={"ID":"dea171b1-195d-427a-bbdc-80ac54af14bd","Type":"ContainerDied","Data":"84a8c770f02d21d31639ccb76335b934f2e03e4842684737d8426c825bd16700"} Mar 19 09:49:07.274031 master-0 kubenswrapper[27819]: I0319 09:49:07.273676 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a8c770f02d21d31639ccb76335b934f2e03e4842684737d8426c825bd16700" Mar 19 09:49:07.274031 master-0 kubenswrapper[27819]: I0319 09:49:07.273660 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9t78z" Mar 19 09:49:07.275346 master-0 kubenswrapper[27819]: I0319 09:49:07.275293 27819 generic.go:334] "Generic (PLEG): container finished" podID="1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" containerID="7bae05fb3383244813acfdfc734e22bbe682bdf7ca0d5badfc1f2e4831980e18" exitCode=0 Mar 19 09:49:07.275453 master-0 kubenswrapper[27819]: I0319 09:49:07.275345 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpnfs" event={"ID":"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87","Type":"ContainerDied","Data":"7bae05fb3383244813acfdfc734e22bbe682bdf7ca0d5badfc1f2e4831980e18"} Mar 19 09:49:07.334541 master-0 kubenswrapper[27819]: I0319 09:49:07.334441 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjmr8\" (UniqueName: \"kubernetes.io/projected/3deae5fc-655a-4c5f-be1b-486fddcfc606-kube-api-access-jjmr8\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:07.334541 master-0 kubenswrapper[27819]: I0319 09:49:07.334507 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmqhs\" (UniqueName: \"kubernetes.io/projected/dea171b1-195d-427a-bbdc-80ac54af14bd-kube-api-access-mmqhs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:07.334541 master-0 kubenswrapper[27819]: I0319 09:49:07.334518 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5258q\" (UniqueName: \"kubernetes.io/projected/881d4aba-28a0-45f1-b05f-c003b3b6b2ac-kube-api-access-5258q\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:08.792508 master-0 kubenswrapper[27819]: I0319 09:49:08.792466 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpnfs" Mar 19 09:49:08.872662 master-0 kubenswrapper[27819]: I0319 09:49:08.872530 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-config-data\") pod \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " Mar 19 09:49:08.872662 master-0 kubenswrapper[27819]: I0319 09:49:08.872654 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjxl4\" (UniqueName: \"kubernetes.io/projected/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-kube-api-access-xjxl4\") pod \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " Mar 19 09:49:08.872925 master-0 kubenswrapper[27819]: I0319 09:49:08.872708 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-combined-ca-bundle\") pod \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " Mar 19 09:49:08.872925 master-0 kubenswrapper[27819]: I0319 09:49:08.872812 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-db-sync-config-data\") pod \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\" (UID: \"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87\") " Mar 19 09:49:08.877314 master-0 kubenswrapper[27819]: I0319 09:49:08.876964 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" (UID: "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:08.879904 master-0 kubenswrapper[27819]: I0319 09:49:08.879855 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-kube-api-access-xjxl4" (OuterVolumeSpecName: "kube-api-access-xjxl4") pod "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" (UID: "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87"). InnerVolumeSpecName "kube-api-access-xjxl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:08.927461 master-0 kubenswrapper[27819]: I0319 09:49:08.927410 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-config-data" (OuterVolumeSpecName: "config-data") pod "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" (UID: "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:08.929781 master-0 kubenswrapper[27819]: I0319 09:49:08.929711 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" (UID: "1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:08.975133 master-0 kubenswrapper[27819]: I0319 09:49:08.975082 27819 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:08.975133 master-0 kubenswrapper[27819]: I0319 09:49:08.975127 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:08.975133 master-0 kubenswrapper[27819]: I0319 09:49:08.975139 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjxl4\" (UniqueName: \"kubernetes.io/projected/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-kube-api-access-xjxl4\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:08.975394 master-0 kubenswrapper[27819]: I0319 09:49:08.975148 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:09.304114 master-0 kubenswrapper[27819]: I0319 09:49:09.304057 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jpnfs" Mar 19 09:49:09.304338 master-0 kubenswrapper[27819]: I0319 09:49:09.304027 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jpnfs" event={"ID":"1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87","Type":"ContainerDied","Data":"b2a1c602eea9e1c73adb1622535ccd1fb9245c1ff1b333d9403ce55ea26e559b"} Mar 19 09:49:09.304380 master-0 kubenswrapper[27819]: I0319 09:49:09.304329 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2a1c602eea9e1c73adb1622535ccd1fb9245c1ff1b333d9403ce55ea26e559b" Mar 19 09:49:09.306024 master-0 kubenswrapper[27819]: I0319 09:49:09.305728 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4c4xb" event={"ID":"206adece-9cbb-4c73-a7fa-b2ba36acbbee","Type":"ContainerStarted","Data":"cda346a18e3ae817a45f054731ddfc16d70398e0a32ef37a8200dfe237c6efb2"} Mar 19 09:49:09.323931 master-0 kubenswrapper[27819]: I0319 09:49:09.320967 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"1c39b38a447a75720c97e7a2c4805e34ad0de4963eb5e3a59fc9c9d0dfc0a2a2"} Mar 19 09:49:09.338565 master-0 kubenswrapper[27819]: I0319 09:49:09.338285 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4c4xb" podStartSLOduration=2.919539411 podStartE2EDuration="9.338259365s" podCreationTimestamp="2026-03-19 09:49:00 +0000 UTC" firstStartedPulling="2026-03-19 09:49:01.733851647 +0000 UTC m=+926.655429339" lastFinishedPulling="2026-03-19 09:49:08.152571601 +0000 UTC m=+933.074149293" observedRunningTime="2026-03-19 09:49:09.323197585 +0000 UTC m=+934.244775287" watchObservedRunningTime="2026-03-19 09:49:09.338259365 +0000 UTC m=+934.259837057" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.750791 27819 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-85df44b749-4nlmd"] Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: E0319 09:49:09.751209 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" containerName="glance-db-sync" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751224 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" containerName="glance-db-sync" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: E0319 09:49:09.751247 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dea171b1-195d-427a-bbdc-80ac54af14bd" containerName="mariadb-database-create" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751253 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea171b1-195d-427a-bbdc-80ac54af14bd" containerName="mariadb-database-create" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: E0319 09:49:09.751273 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deae5fc-655a-4c5f-be1b-486fddcfc606" containerName="mariadb-account-create-update" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751279 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deae5fc-655a-4c5f-be1b-486fddcfc606" containerName="mariadb-account-create-update" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: E0319 09:49:09.751289 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affc6a41-5d03-4f11-9415-2d17fee716d6" containerName="mariadb-account-create-update" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751295 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="affc6a41-5d03-4f11-9415-2d17fee716d6" containerName="mariadb-account-create-update" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: E0319 09:49:09.751332 27819 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="881d4aba-28a0-45f1-b05f-c003b3b6b2ac" containerName="mariadb-database-create" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751338 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="881d4aba-28a0-45f1-b05f-c003b3b6b2ac" containerName="mariadb-database-create" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751525 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="881d4aba-28a0-45f1-b05f-c003b3b6b2ac" containerName="mariadb-database-create" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751573 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="affc6a41-5d03-4f11-9415-2d17fee716d6" containerName="mariadb-account-create-update" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751590 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="3deae5fc-655a-4c5f-be1b-486fddcfc606" containerName="mariadb-account-create-update" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751601 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" containerName="glance-db-sync" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.751613 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="dea171b1-195d-427a-bbdc-80ac54af14bd" containerName="mariadb-database-create" Mar 19 09:49:09.771906 master-0 kubenswrapper[27819]: I0319 09:49:09.752814 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.785065 master-0 kubenswrapper[27819]: I0319 09:49:09.784979 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85df44b749-4nlmd"]
Mar 19 09:49:09.804774 master-0 kubenswrapper[27819]: I0319 09:49:09.802338 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-sb\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.804774 master-0 kubenswrapper[27819]: I0319 09:49:09.802424 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-nb\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.804774 master-0 kubenswrapper[27819]: I0319 09:49:09.802445 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v29v\" (UniqueName: \"kubernetes.io/projected/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-kube-api-access-6v29v\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.804774 master-0 kubenswrapper[27819]: I0319 09:49:09.802463 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-dns-svc\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.804774 master-0 kubenswrapper[27819]: I0319 09:49:09.802676 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-config\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.905574 master-0 kubenswrapper[27819]: I0319 09:49:09.904855 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-sb\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.905574 master-0 kubenswrapper[27819]: I0319 09:49:09.904938 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-nb\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.905574 master-0 kubenswrapper[27819]: I0319 09:49:09.904963 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-dns-svc\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.905574 master-0 kubenswrapper[27819]: I0319 09:49:09.904986 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v29v\" (UniqueName: \"kubernetes.io/projected/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-kube-api-access-6v29v\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.905574 master-0 kubenswrapper[27819]: I0319 09:49:09.905100 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-config\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.908081 master-0 kubenswrapper[27819]: I0319 09:49:09.906028 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-config\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.908081 master-0 kubenswrapper[27819]: I0319 09:49:09.906343 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-dns-svc\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.908081 master-0 kubenswrapper[27819]: I0319 09:49:09.906400 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-sb\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.908081 master-0 kubenswrapper[27819]: I0319 09:49:09.906917 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-nb\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:09.931635 master-0 kubenswrapper[27819]: I0319 09:49:09.925342 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v29v\" (UniqueName: \"kubernetes.io/projected/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-kube-api-access-6v29v\") pod \"dnsmasq-dns-85df44b749-4nlmd\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") " pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:10.176814 master-0 kubenswrapper[27819]: I0319 09:49:10.176744 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:10.405800 master-0 kubenswrapper[27819]: I0319 09:49:10.405124 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"51c13e14167fca88dcc011b8b13684241fa275927be984591016b79b65b1cfd7"}
Mar 19 09:49:10.405800 master-0 kubenswrapper[27819]: I0319 09:49:10.405174 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"3582a8c0ed4073a0f553a7ecac305bc0a6ee2179cc32e39a3743a4751796c20f"}
Mar 19 09:49:10.405800 master-0 kubenswrapper[27819]: I0319 09:49:10.405185 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"a7eaf4863490a37d88eb2c8f7d1e2dff64a8fb73ca4d93516f4af588fbf60d54"}
Mar 19 09:49:10.405800 master-0 kubenswrapper[27819]: I0319 09:49:10.405194 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"7c0e11293be66c27c7ab296a08da381ac177ab980661a478118058d958b00d7c"}
Mar 19 09:49:10.884273 master-0 kubenswrapper[27819]: I0319 09:49:10.884199 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85df44b749-4nlmd"]
Mar 19 09:49:10.884994 master-0 kubenswrapper[27819]: W0319 09:49:10.884956 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5cc4469_9b5f_4b86_b146_88d9d3d51f5b.slice/crio-54b70b667fc5bca34690e3a3975c531c470ae485f8daf224b634288280bc9ea1 WatchSource:0}: Error finding container 54b70b667fc5bca34690e3a3975c531c470ae485f8daf224b634288280bc9ea1: Status 404 returned error can't find the container with id 54b70b667fc5bca34690e3a3975c531c470ae485f8daf224b634288280bc9ea1
Mar 19 09:49:11.415346 master-0 kubenswrapper[27819]: I0319 09:49:11.415185 27819 generic.go:334] "Generic (PLEG): container finished" podID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerID="7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e" exitCode=0
Mar 19 09:49:11.415346 master-0 kubenswrapper[27819]: I0319 09:49:11.415275 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85df44b749-4nlmd" event={"ID":"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b","Type":"ContainerDied","Data":"7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e"}
Mar 19 09:49:11.415346 master-0 kubenswrapper[27819]: I0319 09:49:11.415310 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85df44b749-4nlmd" event={"ID":"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b","Type":"ContainerStarted","Data":"54b70b667fc5bca34690e3a3975c531c470ae485f8daf224b634288280bc9ea1"}
Mar 19 09:49:11.425574 master-0 kubenswrapper[27819]: I0319 09:49:11.424624 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"f5d4182451317d418343b61ba44bf31888e7ee1c7ae7f6cdf69f645873021205"}
Mar 19 09:49:11.425574 master-0 kubenswrapper[27819]: I0319 09:49:11.424677 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"53eef9d1-14df-45aa-ae9b-bc7583066d10","Type":"ContainerStarted","Data":"d650380c6d225b1a98e5fb1bf4e3451616611f6f97ec4ee7a9da49ea4a5aca53"}
Mar 19 09:49:11.504305 master-0 kubenswrapper[27819]: I0319 09:49:11.504231 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.200421827 podStartE2EDuration="44.504215402s" podCreationTimestamp="2026-03-19 09:48:27 +0000 UTC" firstStartedPulling="2026-03-19 09:48:46.611471733 +0000 UTC m=+911.533049415" lastFinishedPulling="2026-03-19 09:49:08.915265308 +0000 UTC m=+933.836842990" observedRunningTime="2026-03-19 09:49:11.503169256 +0000 UTC m=+936.424746948" watchObservedRunningTime="2026-03-19 09:49:11.504215402 +0000 UTC m=+936.425793094"
Mar 19 09:49:11.819395 master-0 kubenswrapper[27819]: I0319 09:49:11.819277 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85df44b749-4nlmd"]
Mar 19 09:49:11.863075 master-0 kubenswrapper[27819]: I0319 09:49:11.862994 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-695dc88c97-j9msf"]
Mar 19 09:49:11.865056 master-0 kubenswrapper[27819]: I0319 09:49:11.865014 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:11.867830 master-0 kubenswrapper[27819]: I0319 09:49:11.867794 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Mar 19 09:49:11.882941 master-0 kubenswrapper[27819]: I0319 09:49:11.882889 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-695dc88c97-j9msf"]
Mar 19 09:49:11.930403 master-0 kubenswrapper[27819]: I0319 09:49:11.927673 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-config\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:11.930403 master-0 kubenswrapper[27819]: I0319 09:49:11.927769 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2fr8\" (UniqueName: \"kubernetes.io/projected/3181847f-14a1-4821-b035-34fdf6008920-kube-api-access-p2fr8\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:11.930403 master-0 kubenswrapper[27819]: I0319 09:49:11.927828 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-swift-storage-0\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:11.930403 master-0 kubenswrapper[27819]: I0319 09:49:11.927886 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-nb\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:11.930403 master-0 kubenswrapper[27819]: I0319 09:49:11.928028 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-svc\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:11.930403 master-0 kubenswrapper[27819]: I0319 09:49:11.928106 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-sb\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.029398 master-0 kubenswrapper[27819]: I0319 09:49:12.029301 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-swift-storage-0\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.029398 master-0 kubenswrapper[27819]: I0319 09:49:12.029380 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-nb\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.029880 master-0 kubenswrapper[27819]: I0319 09:49:12.029666 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-svc\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.029880 master-0 kubenswrapper[27819]: I0319 09:49:12.029819 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-sb\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.030710 master-0 kubenswrapper[27819]: I0319 09:49:12.030653 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-swift-storage-0\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.030833 master-0 kubenswrapper[27819]: I0319 09:49:12.030694 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-nb\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.030833 master-0 kubenswrapper[27819]: I0319 09:49:12.030712 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-config\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.030833 master-0 kubenswrapper[27819]: I0319 09:49:12.030812 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-svc\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.030981 master-0 kubenswrapper[27819]: I0319 09:49:12.030817 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-sb\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.030981 master-0 kubenswrapper[27819]: I0319 09:49:12.030867 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2fr8\" (UniqueName: \"kubernetes.io/projected/3181847f-14a1-4821-b035-34fdf6008920-kube-api-access-p2fr8\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.031652 master-0 kubenswrapper[27819]: I0319 09:49:12.031312 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-config\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.050685 master-0 kubenswrapper[27819]: I0319 09:49:12.048376 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2fr8\" (UniqueName: \"kubernetes.io/projected/3181847f-14a1-4821-b035-34fdf6008920-kube-api-access-p2fr8\") pod \"dnsmasq-dns-695dc88c97-j9msf\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.234584 master-0 kubenswrapper[27819]: I0319 09:49:12.234519 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:12.459700 master-0 kubenswrapper[27819]: I0319 09:49:12.458900 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85df44b749-4nlmd" event={"ID":"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b","Type":"ContainerStarted","Data":"94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c"}
Mar 19 09:49:12.459700 master-0 kubenswrapper[27819]: I0319 09:49:12.458965 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:12.644457 master-0 kubenswrapper[27819]: I0319 09:49:12.643923 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85df44b749-4nlmd" podStartSLOduration=3.643905945 podStartE2EDuration="3.643905945s" podCreationTimestamp="2026-03-19 09:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:12.483565052 +0000 UTC m=+937.405142744" watchObservedRunningTime="2026-03-19 09:49:12.643905945 +0000 UTC m=+937.565483637"
Mar 19 09:49:12.653206 master-0 kubenswrapper[27819]: I0319 09:49:12.653154 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-695dc88c97-j9msf"]
Mar 19 09:49:13.467485 master-0 kubenswrapper[27819]: I0319 09:49:13.467427 27819 generic.go:334] "Generic (PLEG): container finished" podID="3181847f-14a1-4821-b035-34fdf6008920" containerID="99c3ec0ca96f2f1f044803b8e99e1957ca7a62d976a5815c7ebf29735ad549ef" exitCode=0
Mar 19 09:49:13.468081 master-0 kubenswrapper[27819]: I0319 09:49:13.467478 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" event={"ID":"3181847f-14a1-4821-b035-34fdf6008920","Type":"ContainerDied","Data":"99c3ec0ca96f2f1f044803b8e99e1957ca7a62d976a5815c7ebf29735ad549ef"}
Mar 19 09:49:13.468081 master-0 kubenswrapper[27819]: I0319 09:49:13.467560 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" event={"ID":"3181847f-14a1-4821-b035-34fdf6008920","Type":"ContainerStarted","Data":"fda9fe38e147516c5ea7ad9d017070b30e4dd1a78a39f7588b4e3892b400a787"}
Mar 19 09:49:13.468081 master-0 kubenswrapper[27819]: I0319 09:49:13.467665 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85df44b749-4nlmd" podUID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerName="dnsmasq-dns" containerID="cri-o://94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c" gracePeriod=10
Mar 19 09:49:13.934977 master-0 kubenswrapper[27819]: I0319 09:49:13.934904 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:14.080001 master-0 kubenswrapper[27819]: I0319 09:49:14.079855 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v29v\" (UniqueName: \"kubernetes.io/projected/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-kube-api-access-6v29v\") pod \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") "
Mar 19 09:49:14.080201 master-0 kubenswrapper[27819]: I0319 09:49:14.080072 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-sb\") pod \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") "
Mar 19 09:49:14.080201 master-0 kubenswrapper[27819]: I0319 09:49:14.080114 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-nb\") pod \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") "
Mar 19 09:49:14.080201 master-0 kubenswrapper[27819]: I0319 09:49:14.080195 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-config\") pod \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") "
Mar 19 09:49:14.080318 master-0 kubenswrapper[27819]: I0319 09:49:14.080291 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-dns-svc\") pod \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\" (UID: \"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b\") "
Mar 19 09:49:14.084349 master-0 kubenswrapper[27819]: I0319 09:49:14.084288 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-kube-api-access-6v29v" (OuterVolumeSpecName: "kube-api-access-6v29v") pod "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" (UID: "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b"). InnerVolumeSpecName "kube-api-access-6v29v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:49:14.122428 master-0 kubenswrapper[27819]: I0319 09:49:14.122349 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-config" (OuterVolumeSpecName: "config") pod "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" (UID: "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:49:14.128324 master-0 kubenswrapper[27819]: I0319 09:49:14.128247 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" (UID: "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:49:14.131654 master-0 kubenswrapper[27819]: I0319 09:49:14.131612 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" (UID: "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:49:14.135617 master-0 kubenswrapper[27819]: I0319 09:49:14.135311 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" (UID: "d5cc4469-9b5f-4b86-b146-88d9d3d51f5b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:49:14.182962 master-0 kubenswrapper[27819]: I0319 09:49:14.182900 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:14.182962 master-0 kubenswrapper[27819]: I0319 09:49:14.182945 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:14.182962 master-0 kubenswrapper[27819]: I0319 09:49:14.182960 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v29v\" (UniqueName: \"kubernetes.io/projected/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-kube-api-access-6v29v\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:14.182962 master-0 kubenswrapper[27819]: I0319 09:49:14.182973 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:14.182962 master-0 kubenswrapper[27819]: I0319 09:49:14.182983 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:14.478882 master-0 kubenswrapper[27819]: I0319 09:49:14.478819 27819 generic.go:334] "Generic (PLEG): container finished" podID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerID="94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c" exitCode=0
Mar 19 09:49:14.479398 master-0 kubenswrapper[27819]: I0319 09:49:14.478879 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85df44b749-4nlmd"
Mar 19 09:49:14.479398 master-0 kubenswrapper[27819]: I0319 09:49:14.478884 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85df44b749-4nlmd" event={"ID":"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b","Type":"ContainerDied","Data":"94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c"}
Mar 19 09:49:14.479398 master-0 kubenswrapper[27819]: I0319 09:49:14.478967 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85df44b749-4nlmd" event={"ID":"d5cc4469-9b5f-4b86-b146-88d9d3d51f5b","Type":"ContainerDied","Data":"54b70b667fc5bca34690e3a3975c531c470ae485f8daf224b634288280bc9ea1"}
Mar 19 09:49:14.479398 master-0 kubenswrapper[27819]: I0319 09:49:14.478990 27819 scope.go:117] "RemoveContainer" containerID="94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c"
Mar 19 09:49:14.483627 master-0 kubenswrapper[27819]: I0319 09:49:14.483586 27819 generic.go:334] "Generic (PLEG): container finished" podID="206adece-9cbb-4c73-a7fa-b2ba36acbbee" containerID="cda346a18e3ae817a45f054731ddfc16d70398e0a32ef37a8200dfe237c6efb2" exitCode=0
Mar 19 09:49:14.483720 master-0 kubenswrapper[27819]: I0319 09:49:14.483663 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4c4xb" event={"ID":"206adece-9cbb-4c73-a7fa-b2ba36acbbee","Type":"ContainerDied","Data":"cda346a18e3ae817a45f054731ddfc16d70398e0a32ef37a8200dfe237c6efb2"}
Mar 19 09:49:14.485938 master-0 kubenswrapper[27819]: I0319 09:49:14.485895 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" event={"ID":"3181847f-14a1-4821-b035-34fdf6008920","Type":"ContainerStarted","Data":"f2727583c352932b102c6aa0beed53583b272dd09f64c25e329d4e428233d242"}
Mar 19 09:49:14.486238 master-0 kubenswrapper[27819]: I0319 09:49:14.486039 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-695dc88c97-j9msf"
Mar 19 09:49:14.500403 master-0 kubenswrapper[27819]: I0319 09:49:14.500356 27819 scope.go:117] "RemoveContainer" containerID="7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e"
Mar 19 09:49:14.536828 master-0 kubenswrapper[27819]: I0319 09:49:14.533489 27819 scope.go:117] "RemoveContainer" containerID="94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c"
Mar 19 09:49:14.536828 master-0 kubenswrapper[27819]: E0319 09:49:14.534500 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c\": container with ID starting with 94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c not found: ID does not exist" containerID="94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c"
Mar 19 09:49:14.536828 master-0 kubenswrapper[27819]: I0319 09:49:14.534577 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c"} err="failed to get container status \"94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c\": rpc error: code = NotFound desc = could not find container \"94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c\": container with ID starting with 94367114fafdd7bece5f35d377081e9b638205cf396ed252f6709ec79db6454c not found: ID does not exist"
Mar 19 09:49:14.536828 master-0 kubenswrapper[27819]: I0319 09:49:14.534615 27819 scope.go:117] "RemoveContainer" containerID="7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e"
Mar 19 09:49:14.536828 master-0 kubenswrapper[27819]: E0319 09:49:14.535722 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e\": container with ID starting with 7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e not found: ID does not exist" containerID="7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e"
Mar 19 09:49:14.536828 master-0 kubenswrapper[27819]: I0319 09:49:14.535753 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e"} err="failed to get container status \"7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e\": rpc error: code = NotFound desc = could not find container \"7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e\": container with ID starting with 7d221a2483aa0fa55ecab976f750728f23fcb7e68346f7693e95340dab4b857e not found: ID does not exist"
Mar 19 09:49:14.559082 master-0 kubenswrapper[27819]: I0319 09:49:14.559018 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85df44b749-4nlmd"]
Mar 19 09:49:14.571037 master-0 kubenswrapper[27819]: I0319 09:49:14.570970 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85df44b749-4nlmd"]
Mar 19 09:49:14.571530 master-0 kubenswrapper[27819]: I0319 09:49:14.571475 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" podStartSLOduration=3.571465117 podStartE2EDuration="3.571465117s" podCreationTimestamp="2026-03-19 09:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:14.565492192 +0000 UTC m=+939.487069904" watchObservedRunningTime="2026-03-19 09:49:14.571465117 +0000 UTC m=+939.493042809"
Mar 19 09:49:15.290659 master-0 kubenswrapper[27819]: I0319 09:49:15.290599 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" path="/var/lib/kubelet/pods/d5cc4469-9b5f-4b86-b146-88d9d3d51f5b/volumes"
Mar 19 09:49:16.037688 master-0 kubenswrapper[27819]: I0319 09:49:16.037638 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:16.126205 master-0 kubenswrapper[27819]: I0319 09:49:16.126128 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-config-data\") pod \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") "
Mar 19 09:49:16.126427 master-0 kubenswrapper[27819]: I0319 09:49:16.126327 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcbpg\" (UniqueName: \"kubernetes.io/projected/206adece-9cbb-4c73-a7fa-b2ba36acbbee-kube-api-access-mcbpg\") pod \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") "
Mar 19 09:49:16.126879 master-0 kubenswrapper[27819]: I0319 09:49:16.126853 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-combined-ca-bundle\") pod \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\" (UID: \"206adece-9cbb-4c73-a7fa-b2ba36acbbee\") "
Mar 19 09:49:16.129363 master-0 kubenswrapper[27819]: I0319 09:49:16.129220 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206adece-9cbb-4c73-a7fa-b2ba36acbbee-kube-api-access-mcbpg" (OuterVolumeSpecName: "kube-api-access-mcbpg") pod "206adece-9cbb-4c73-a7fa-b2ba36acbbee" (UID: "206adece-9cbb-4c73-a7fa-b2ba36acbbee"). InnerVolumeSpecName "kube-api-access-mcbpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:49:16.152151 master-0 kubenswrapper[27819]: I0319 09:49:16.152084 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "206adece-9cbb-4c73-a7fa-b2ba36acbbee" (UID: "206adece-9cbb-4c73-a7fa-b2ba36acbbee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:16.175798 master-0 kubenswrapper[27819]: I0319 09:49:16.174825 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-config-data" (OuterVolumeSpecName: "config-data") pod "206adece-9cbb-4c73-a7fa-b2ba36acbbee" (UID: "206adece-9cbb-4c73-a7fa-b2ba36acbbee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:16.230089 master-0 kubenswrapper[27819]: I0319 09:49:16.229994 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcbpg\" (UniqueName: \"kubernetes.io/projected/206adece-9cbb-4c73-a7fa-b2ba36acbbee-kube-api-access-mcbpg\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:16.230089 master-0 kubenswrapper[27819]: I0319 09:49:16.230051 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:16.230089 master-0 kubenswrapper[27819]: I0319 09:49:16.230065 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206adece-9cbb-4c73-a7fa-b2ba36acbbee-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:16.507800 master-0 kubenswrapper[27819]: I0319 09:49:16.507672 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4c4xb" event={"ID":"206adece-9cbb-4c73-a7fa-b2ba36acbbee","Type":"ContainerDied","Data":"c0a87fdd2d523c91bc503c72336eae0ad49aa7fa618257ad2ceee1c709bad0eb"}
Mar 19 09:49:16.507800 master-0 kubenswrapper[27819]: I0319 09:49:16.507725 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0a87fdd2d523c91bc503c72336eae0ad49aa7fa618257ad2ceee1c709bad0eb"
Mar 19 09:49:16.507800 master-0 kubenswrapper[27819]: I0319 09:49:16.507696 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4c4xb"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: I0319 09:49:17.162836 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d4bnt"]
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: E0319 09:49:17.163474 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206adece-9cbb-4c73-a7fa-b2ba36acbbee" containerName="keystone-db-sync"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: I0319 09:49:17.163494 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="206adece-9cbb-4c73-a7fa-b2ba36acbbee" containerName="keystone-db-sync"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: E0319 09:49:17.163532 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerName="dnsmasq-dns"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: I0319 09:49:17.163562 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerName="dnsmasq-dns"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: E0319 09:49:17.163588 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerName="init"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: I0319 09:49:17.163598 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerName="init"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: I0319 09:49:17.163874 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="206adece-9cbb-4c73-a7fa-b2ba36acbbee" containerName="keystone-db-sync"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: I0319 09:49:17.163920 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5cc4469-9b5f-4b86-b146-88d9d3d51f5b" containerName="dnsmasq-dns"
Mar 19 09:49:17.165153 master-0 kubenswrapper[27819]: I0319 09:49:17.164799 27819 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.170805 master-0 kubenswrapper[27819]: I0319 09:49:17.170753 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 19 09:49:17.171301 master-0 kubenswrapper[27819]: I0319 09:49:17.171286 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 09:49:17.171701 master-0 kubenswrapper[27819]: I0319 09:49:17.170891 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 19 09:49:17.171902 master-0 kubenswrapper[27819]: I0319 09:49:17.171888 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 19 09:49:17.211879 master-0 kubenswrapper[27819]: I0319 09:49:17.210617 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d4bnt"] Mar 19 09:49:17.286561 master-0 kubenswrapper[27819]: I0319 09:49:17.282700 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-fernet-keys\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.286561 master-0 kubenswrapper[27819]: I0319 09:49:17.282765 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9wvp\" (UniqueName: \"kubernetes.io/projected/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-kube-api-access-g9wvp\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.286561 master-0 kubenswrapper[27819]: I0319 09:49:17.282790 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-config-data\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.286561 master-0 kubenswrapper[27819]: I0319 09:49:17.282849 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-combined-ca-bundle\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.286561 master-0 kubenswrapper[27819]: I0319 09:49:17.282911 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-scripts\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.286561 master-0 kubenswrapper[27819]: I0319 09:49:17.282971 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-credential-keys\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.365563 master-0 kubenswrapper[27819]: I0319 09:49:17.362645 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-695dc88c97-j9msf"] Mar 19 09:49:17.365563 master-0 kubenswrapper[27819]: I0319 09:49:17.362876 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" podUID="3181847f-14a1-4821-b035-34fdf6008920" containerName="dnsmasq-dns" containerID="cri-o://f2727583c352932b102c6aa0beed53583b272dd09f64c25e329d4e428233d242" 
gracePeriod=10 Mar 19 09:49:17.388348 master-0 kubenswrapper[27819]: I0319 09:49:17.386440 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-fernet-keys\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.388348 master-0 kubenswrapper[27819]: I0319 09:49:17.386497 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9wvp\" (UniqueName: \"kubernetes.io/projected/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-kube-api-access-g9wvp\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.388348 master-0 kubenswrapper[27819]: I0319 09:49:17.386528 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-config-data\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.388348 master-0 kubenswrapper[27819]: I0319 09:49:17.386639 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-combined-ca-bundle\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.388348 master-0 kubenswrapper[27819]: I0319 09:49:17.386706 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-scripts\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.388348 
master-0 kubenswrapper[27819]: I0319 09:49:17.386767 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-credential-keys\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.403986 master-0 kubenswrapper[27819]: I0319 09:49:17.403724 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dc456b85-29flf"] Mar 19 09:49:17.406284 master-0 kubenswrapper[27819]: I0319 09:49:17.405578 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.424603 master-0 kubenswrapper[27819]: I0319 09:49:17.424255 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-credential-keys\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.449446 master-0 kubenswrapper[27819]: I0319 09:49:17.449401 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-combined-ca-bundle\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.461695 master-0 kubenswrapper[27819]: I0319 09:49:17.460840 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-scripts\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.461695 master-0 kubenswrapper[27819]: I0319 09:49:17.461173 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9wvp\" (UniqueName: \"kubernetes.io/projected/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-kube-api-access-g9wvp\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.461995 master-0 kubenswrapper[27819]: I0319 09:49:17.461911 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-config-data\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.464718 master-0 kubenswrapper[27819]: I0319 09:49:17.462333 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-fernet-keys\") pod \"keystone-bootstrap-d4bnt\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") " pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.469136 master-0 kubenswrapper[27819]: I0319 09:49:17.469103 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dc456b85-29flf"] Mar 19 09:49:17.503728 master-0 kubenswrapper[27819]: I0319 09:49:17.490224 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fb9\" (UniqueName: \"kubernetes.io/projected/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-kube-api-access-98fb9\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.503728 master-0 kubenswrapper[27819]: I0319 09:49:17.490296 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-svc\") pod 
\"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.503728 master-0 kubenswrapper[27819]: I0319 09:49:17.490319 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-nb\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.503728 master-0 kubenswrapper[27819]: I0319 09:49:17.490368 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-sb\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.503728 master-0 kubenswrapper[27819]: I0319 09:49:17.490418 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-swift-storage-0\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.503728 master-0 kubenswrapper[27819]: I0319 09:49:17.490503 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-config\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.517608 master-0 kubenswrapper[27819]: I0319 09:49:17.517554 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-fx99f"] Mar 19 
09:49:17.526074 master-0 kubenswrapper[27819]: I0319 09:49:17.523907 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:17.577487 master-0 kubenswrapper[27819]: I0319 09:49:17.573769 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4bmsm"] Mar 19 09:49:17.583678 master-0 kubenswrapper[27819]: I0319 09:49:17.580682 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.594226 master-0 kubenswrapper[27819]: I0319 09:49:17.594147 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-config\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.594420 master-0 kubenswrapper[27819]: I0319 09:49:17.594342 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fb9\" (UniqueName: \"kubernetes.io/projected/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-kube-api-access-98fb9\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.594420 master-0 kubenswrapper[27819]: I0319 09:49:17.594400 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-svc\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.594570 master-0 kubenswrapper[27819]: I0319 09:49:17.594424 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-nb\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.594570 master-0 kubenswrapper[27819]: I0319 09:49:17.594503 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-sb\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.594665 master-0 kubenswrapper[27819]: I0319 09:49:17.594624 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-swift-storage-0\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.599927 master-0 kubenswrapper[27819]: I0319 09:49:17.596810 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-db-sync-fvnxz"] Mar 19 09:49:17.600486 master-0 kubenswrapper[27819]: I0319 09:49:17.600399 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 19 09:49:17.600728 master-0 kubenswrapper[27819]: I0319 09:49:17.600417 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 19 09:49:17.602021 master-0 kubenswrapper[27819]: I0319 09:49:17.601981 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-swift-storage-0\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.615572 master-0 
kubenswrapper[27819]: I0319 09:49:17.603990 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.615572 master-0 kubenswrapper[27819]: I0319 09:49:17.607159 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-nb\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.615572 master-0 kubenswrapper[27819]: I0319 09:49:17.607364 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-config\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.615572 master-0 kubenswrapper[27819]: I0319 09:49:17.607449 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-sb\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.615572 master-0 kubenswrapper[27819]: I0319 09:49:17.609248 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-svc\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.622567 master-0 kubenswrapper[27819]: I0319 09:49:17.617801 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-scripts" Mar 19 09:49:17.622567 master-0 kubenswrapper[27819]: I0319 09:49:17.618054 27819 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cinder-255d6-config-data" Mar 19 09:49:17.622567 master-0 kubenswrapper[27819]: I0319 09:49:17.621053 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d4bnt" Mar 19 09:49:17.658477 master-0 kubenswrapper[27819]: I0319 09:49:17.657125 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-fx99f"] Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.701689 27819 generic.go:334] "Generic (PLEG): container finished" podID="3181847f-14a1-4821-b035-34fdf6008920" containerID="f2727583c352932b102c6aa0beed53583b272dd09f64c25e329d4e428233d242" exitCode=0 Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.701738 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" event={"ID":"3181847f-14a1-4821-b035-34fdf6008920","Type":"ContainerDied","Data":"f2727583c352932b102c6aa0beed53583b272dd09f64c25e329d4e428233d242"} Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.705163 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqf7v\" (UniqueName: \"kubernetes.io/projected/cdbc1696-8633-4090-93f5-84b5ea19bc9a-kube-api-access-gqf7v\") pod \"ironic-db-create-fx99f\" (UID: \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.705298 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbc1696-8633-4090-93f5-84b5ea19bc9a-operator-scripts\") pod \"ironic-db-create-fx99f\" (UID: \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.705553 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-db-sync-config-data\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.705781 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-config-data\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.705913 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-combined-ca-bundle\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.706866 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-scripts\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.706938 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnlc\" (UniqueName: \"kubernetes.io/projected/5ef7ab15-9976-4989-b837-55f0b27ee661-kube-api-access-bgnlc\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 
09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.707219 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdrj\" (UniqueName: \"kubernetes.io/projected/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-kube-api-access-4cdrj\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.707275 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ef7ab15-9976-4989-b837-55f0b27ee661-etc-machine-id\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.707313 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-combined-ca-bundle\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.709571 master-0 kubenswrapper[27819]: I0319 09:49:17.707392 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-config\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.723834 master-0 kubenswrapper[27819]: I0319 09:49:17.723646 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fb9\" (UniqueName: \"kubernetes.io/projected/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-kube-api-access-98fb9\") pod \"dnsmasq-dns-75dc456b85-29flf\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " 
pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.765248 master-0 kubenswrapper[27819]: I0319 09:49:17.765203 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4bmsm"] Mar 19 09:49:17.800490 master-0 kubenswrapper[27819]: I0319 09:49:17.799682 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-db-sync-fvnxz"] Mar 19 09:49:17.821893 master-0 kubenswrapper[27819]: I0319 09:49:17.821815 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-569f-account-create-update-6h6km"] Mar 19 09:49:17.824639 master-0 kubenswrapper[27819]: I0319 09:49:17.824575 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-scripts\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.824987 master-0 kubenswrapper[27819]: I0319 09:49:17.824941 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgnlc\" (UniqueName: \"kubernetes.io/projected/5ef7ab15-9976-4989-b837-55f0b27ee661-kube-api-access-bgnlc\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.825244 master-0 kubenswrapper[27819]: I0319 09:49:17.825181 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:17.825783 master-0 kubenswrapper[27819]: I0319 09:49:17.825704 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdrj\" (UniqueName: \"kubernetes.io/projected/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-kube-api-access-4cdrj\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.825863 master-0 kubenswrapper[27819]: I0319 09:49:17.825789 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ef7ab15-9976-4989-b837-55f0b27ee661-etc-machine-id\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.825863 master-0 kubenswrapper[27819]: I0319 09:49:17.825825 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-combined-ca-bundle\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.826141 master-0 kubenswrapper[27819]: I0319 09:49:17.826119 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-config\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.826328 master-0 kubenswrapper[27819]: I0319 09:49:17.826312 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqf7v\" (UniqueName: \"kubernetes.io/projected/cdbc1696-8633-4090-93f5-84b5ea19bc9a-kube-api-access-gqf7v\") pod \"ironic-db-create-fx99f\" (UID: 
\"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:17.826436 master-0 kubenswrapper[27819]: I0319 09:49:17.826423 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbc1696-8633-4090-93f5-84b5ea19bc9a-operator-scripts\") pod \"ironic-db-create-fx99f\" (UID: \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:17.826508 master-0 kubenswrapper[27819]: I0319 09:49:17.826496 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-db-sync-config-data\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.826728 master-0 kubenswrapper[27819]: I0319 09:49:17.826661 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-config-data\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.826840 master-0 kubenswrapper[27819]: I0319 09:49:17.826755 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-combined-ca-bundle\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.829113 master-0 kubenswrapper[27819]: I0319 09:49:17.827494 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ef7ab15-9976-4989-b837-55f0b27ee661-etc-machine-id\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: 
\"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.839751 master-0 kubenswrapper[27819]: I0319 09:49:17.839173 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Mar 19 09:49:17.840329 master-0 kubenswrapper[27819]: I0319 09:49:17.840289 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbc1696-8633-4090-93f5-84b5ea19bc9a-operator-scripts\") pod \"ironic-db-create-fx99f\" (UID: \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:17.842188 master-0 kubenswrapper[27819]: I0319 09:49:17.842138 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-569f-account-create-update-6h6km"] Mar 19 09:49:17.849181 master-0 kubenswrapper[27819]: I0319 09:49:17.847378 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-config\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.862430 master-0 kubenswrapper[27819]: I0319 09:49:17.859870 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqf7v\" (UniqueName: \"kubernetes.io/projected/cdbc1696-8633-4090-93f5-84b5ea19bc9a-kube-api-access-gqf7v\") pod \"ironic-db-create-fx99f\" (UID: \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:17.862430 master-0 kubenswrapper[27819]: I0319 09:49:17.860247 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-scripts\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.871188 
master-0 kubenswrapper[27819]: I0319 09:49:17.868975 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdrj\" (UniqueName: \"kubernetes.io/projected/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-kube-api-access-4cdrj\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.882714 master-0 kubenswrapper[27819]: I0319 09:49:17.880870 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-db-sync-config-data\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.882714 master-0 kubenswrapper[27819]: I0319 09:49:17.882261 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-combined-ca-bundle\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.882714 master-0 kubenswrapper[27819]: I0319 09:49:17.882446 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-combined-ca-bundle\") pod \"neutron-db-sync-4bmsm\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:17.899031 master-0 kubenswrapper[27819]: I0319 09:49:17.898982 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgnlc\" (UniqueName: \"kubernetes.io/projected/5ef7ab15-9976-4989-b837-55f0b27ee661-kube-api-access-bgnlc\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.913869 master-0 
kubenswrapper[27819]: I0319 09:49:17.913814 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-config-data\") pod \"cinder-255d6-db-sync-fvnxz\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") " pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.933455 master-0 kubenswrapper[27819]: I0319 09:49:17.933221 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:17.935180 master-0 kubenswrapper[27819]: I0319 09:49:17.935131 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mm7\" (UniqueName: \"kubernetes.io/projected/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-kube-api-access-q7mm7\") pod \"ironic-569f-account-create-update-6h6km\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:17.936430 master-0 kubenswrapper[27819]: I0319 09:49:17.936410 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-operator-scripts\") pod \"ironic-569f-account-create-update-6h6km\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:17.988017 master-0 kubenswrapper[27819]: I0319 09:49:17.970467 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dc456b85-29flf"] Mar 19 09:49:17.988017 master-0 kubenswrapper[27819]: I0319 09:49:17.971438 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:17.988017 master-0 kubenswrapper[27819]: I0319 09:49:17.983348 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-s6mzj"] Mar 19 09:49:17.988017 master-0 kubenswrapper[27819]: I0319 09:49:17.984781 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:17.988017 master-0 kubenswrapper[27819]: I0319 09:49:17.987899 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 19 09:49:17.989078 master-0 kubenswrapper[27819]: I0319 09:49:17.988522 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 19 09:49:18.017864 master-0 kubenswrapper[27819]: I0319 09:49:17.998920 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c7b89c5-sw8qr"] Mar 19 09:49:18.017864 master-0 kubenswrapper[27819]: I0319 09:49:18.001621 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.017864 master-0 kubenswrapper[27819]: I0319 09:49:18.016837 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:18.022783 master-0 kubenswrapper[27819]: I0319 09:49:18.022717 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s6mzj"] Mar 19 09:49:18.047139 master-0 kubenswrapper[27819]: I0319 09:49:18.047082 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mm7\" (UniqueName: \"kubernetes.io/projected/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-kube-api-access-q7mm7\") pod \"ironic-569f-account-create-update-6h6km\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:18.047339 master-0 kubenswrapper[27819]: I0319 09:49:18.047212 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-operator-scripts\") pod \"ironic-569f-account-create-update-6h6km\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:18.048918 master-0 kubenswrapper[27819]: I0319 09:49:18.048884 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-operator-scripts\") pod \"ironic-569f-account-create-update-6h6km\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:18.059447 master-0 kubenswrapper[27819]: I0319 09:49:18.058654 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c7b89c5-sw8qr"] Mar 19 09:49:18.089522 master-0 kubenswrapper[27819]: I0319 09:49:18.089463 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mm7\" (UniqueName: \"kubernetes.io/projected/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-kube-api-access-q7mm7\") pod 
\"ironic-569f-account-create-update-6h6km\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149347 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-config\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149397 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-combined-ca-bundle\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149424 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-config-data\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149478 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-sb\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149502 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-nb\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149570 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-svc\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149598 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-scripts\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149631 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abe1e88-82d4-488e-bd25-08cf29f5952e-logs\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149654 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpn99\" (UniqueName: \"kubernetes.io/projected/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-kube-api-access-zpn99\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149694 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2tt\" (UniqueName: \"kubernetes.io/projected/1abe1e88-82d4-488e-bd25-08cf29f5952e-kube-api-access-bk2tt\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.159307 master-0 kubenswrapper[27819]: I0319 09:49:18.149722 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-swift-storage-0\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.187735 master-0 kubenswrapper[27819]: I0319 09:49:18.181912 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:18.189668 master-0 kubenswrapper[27819]: I0319 09:49:18.189214 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.254037 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abe1e88-82d4-488e-bd25-08cf29f5952e-logs\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.254102 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpn99\" (UniqueName: \"kubernetes.io/projected/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-kube-api-access-zpn99\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.254179 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2tt\" (UniqueName: \"kubernetes.io/projected/1abe1e88-82d4-488e-bd25-08cf29f5952e-kube-api-access-bk2tt\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.254227 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-swift-storage-0\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.254295 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-config\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: 
\"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.254310 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-combined-ca-bundle\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.254338 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-config-data\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.256445 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-sb\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.256482 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-nb\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.257016 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-svc\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: 
\"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.257080 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-scripts\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.257746 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-swift-storage-0\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.258574 master-0 kubenswrapper[27819]: I0319 09:49:18.258129 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abe1e88-82d4-488e-bd25-08cf29f5952e-logs\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.260634 master-0 kubenswrapper[27819]: I0319 09:49:18.260158 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-config\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.262395 master-0 kubenswrapper[27819]: I0319 09:49:18.262156 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-sb\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " 
pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.263868 master-0 kubenswrapper[27819]: I0319 09:49:18.262703 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-nb\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.271145 master-0 kubenswrapper[27819]: I0319 09:49:18.271089 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-svc\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.295418 master-0 kubenswrapper[27819]: I0319 09:49:18.295362 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-scripts\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.299852 master-0 kubenswrapper[27819]: I0319 09:49:18.299816 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-combined-ca-bundle\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.303405 master-0 kubenswrapper[27819]: I0319 09:49:18.303368 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:18.303915 master-0 kubenswrapper[27819]: I0319 09:49:18.303808 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpn99\" (UniqueName: \"kubernetes.io/projected/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-kube-api-access-zpn99\") pod \"dnsmasq-dns-75c7b89c5-sw8qr\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.304128 master-0 kubenswrapper[27819]: I0319 09:49:18.303982 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-config-data\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.305274 master-0 kubenswrapper[27819]: I0319 09:49:18.305229 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2tt\" (UniqueName: \"kubernetes.io/projected/1abe1e88-82d4-488e-bd25-08cf29f5952e-kube-api-access-bk2tt\") pod \"placement-db-sync-s6mzj\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.322968 master-0 kubenswrapper[27819]: I0319 09:49:18.322691 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:18.375445 master-0 kubenswrapper[27819]: I0319 09:49:18.375335 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-sb\") pod \"3181847f-14a1-4821-b035-34fdf6008920\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " Mar 19 09:49:18.375669 master-0 kubenswrapper[27819]: I0319 09:49:18.375566 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-config\") pod \"3181847f-14a1-4821-b035-34fdf6008920\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " Mar 19 09:49:18.375669 master-0 kubenswrapper[27819]: I0319 09:49:18.375601 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-svc\") pod \"3181847f-14a1-4821-b035-34fdf6008920\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " Mar 19 09:49:18.375772 master-0 kubenswrapper[27819]: I0319 09:49:18.375666 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-nb\") pod \"3181847f-14a1-4821-b035-34fdf6008920\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " Mar 19 09:49:18.375820 master-0 kubenswrapper[27819]: I0319 09:49:18.375773 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2fr8\" (UniqueName: \"kubernetes.io/projected/3181847f-14a1-4821-b035-34fdf6008920-kube-api-access-p2fr8\") pod \"3181847f-14a1-4821-b035-34fdf6008920\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " Mar 19 09:49:18.376890 master-0 kubenswrapper[27819]: I0319 09:49:18.375847 27819 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-swift-storage-0\") pod \"3181847f-14a1-4821-b035-34fdf6008920\" (UID: \"3181847f-14a1-4821-b035-34fdf6008920\") " Mar 19 09:49:18.414601 master-0 kubenswrapper[27819]: I0319 09:49:18.411387 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3181847f-14a1-4821-b035-34fdf6008920-kube-api-access-p2fr8" (OuterVolumeSpecName: "kube-api-access-p2fr8") pod "3181847f-14a1-4821-b035-34fdf6008920" (UID: "3181847f-14a1-4821-b035-34fdf6008920"). InnerVolumeSpecName "kube-api-access-p2fr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:18.474418 master-0 kubenswrapper[27819]: I0319 09:49:18.474346 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:18.483432 master-0 kubenswrapper[27819]: I0319 09:49:18.482820 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2fr8\" (UniqueName: \"kubernetes.io/projected/3181847f-14a1-4821-b035-34fdf6008920-kube-api-access-p2fr8\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:18.604284 master-0 kubenswrapper[27819]: I0319 09:49:18.601965 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d4bnt"] Mar 19 09:49:18.647265 master-0 kubenswrapper[27819]: I0319 09:49:18.645722 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-config" (OuterVolumeSpecName: "config") pod "3181847f-14a1-4821-b035-34fdf6008920" (UID: "3181847f-14a1-4821-b035-34fdf6008920"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:18.652360 master-0 kubenswrapper[27819]: I0319 09:49:18.651000 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3181847f-14a1-4821-b035-34fdf6008920" (UID: "3181847f-14a1-4821-b035-34fdf6008920"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:18.658719 master-0 kubenswrapper[27819]: I0319 09:49:18.658654 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3181847f-14a1-4821-b035-34fdf6008920" (UID: "3181847f-14a1-4821-b035-34fdf6008920"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:18.669315 master-0 kubenswrapper[27819]: I0319 09:49:18.669242 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3181847f-14a1-4821-b035-34fdf6008920" (UID: "3181847f-14a1-4821-b035-34fdf6008920"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:18.690062 master-0 kubenswrapper[27819]: I0319 09:49:18.690008 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3181847f-14a1-4821-b035-34fdf6008920" (UID: "3181847f-14a1-4821-b035-34fdf6008920"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:18.692517 master-0 kubenswrapper[27819]: I0319 09:49:18.691354 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:18.692517 master-0 kubenswrapper[27819]: I0319 09:49:18.691388 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:18.692517 master-0 kubenswrapper[27819]: I0319 09:49:18.691400 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:18.692517 master-0 kubenswrapper[27819]: I0319 09:49:18.691415 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:18.692517 master-0 kubenswrapper[27819]: I0319 09:49:18.691428 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3181847f-14a1-4821-b035-34fdf6008920-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:18.715303 master-0 kubenswrapper[27819]: W0319 09:49:18.715219 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba4cae5_c391_4b24_8e04_5a97edc47fe2.slice/crio-a1a65d45cfabf2d9be2ac65763ae11ba1b14cf2bfc83f7e9fef88bcb474083a8 WatchSource:0}: Error finding container a1a65d45cfabf2d9be2ac65763ae11ba1b14cf2bfc83f7e9fef88bcb474083a8: Status 404 returned error can't find the container with id 
a1a65d45cfabf2d9be2ac65763ae11ba1b14cf2bfc83f7e9fef88bcb474083a8 Mar 19 09:49:18.725818 master-0 kubenswrapper[27819]: I0319 09:49:18.725764 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" event={"ID":"3181847f-14a1-4821-b035-34fdf6008920","Type":"ContainerDied","Data":"fda9fe38e147516c5ea7ad9d017070b30e4dd1a78a39f7588b4e3892b400a787"} Mar 19 09:49:18.725916 master-0 kubenswrapper[27819]: I0319 09:49:18.725838 27819 scope.go:117] "RemoveContainer" containerID="f2727583c352932b102c6aa0beed53583b272dd09f64c25e329d4e428233d242" Mar 19 09:49:18.725916 master-0 kubenswrapper[27819]: I0319 09:49:18.725856 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-695dc88c97-j9msf" Mar 19 09:49:18.891754 master-0 kubenswrapper[27819]: I0319 09:49:18.891508 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dc456b85-29flf"] Mar 19 09:49:18.926918 master-0 kubenswrapper[27819]: I0319 09:49:18.926641 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-695dc88c97-j9msf"] Mar 19 09:49:18.926918 master-0 kubenswrapper[27819]: W0319 09:49:18.926710 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a2b55d_11f5_4b86_ad8e_1b1bb6f6138d.slice/crio-1bb7396c3cd13d3a8b06a0f398a5cb6ebd822920654b843e3124484a3bd7191d WatchSource:0}: Error finding container 1bb7396c3cd13d3a8b06a0f398a5cb6ebd822920654b843e3124484a3bd7191d: Status 404 returned error can't find the container with id 1bb7396c3cd13d3a8b06a0f398a5cb6ebd822920654b843e3124484a3bd7191d Mar 19 09:49:18.926918 master-0 kubenswrapper[27819]: I0319 09:49:18.926768 27819 scope.go:117] "RemoveContainer" containerID="99c3ec0ca96f2f1f044803b8e99e1957ca7a62d976a5815c7ebf29735ad549ef" Mar 19 09:49:19.009264 master-0 kubenswrapper[27819]: I0319 09:49:19.008982 27819 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-695dc88c97-j9msf"] Mar 19 09:49:19.154090 master-0 kubenswrapper[27819]: I0319 09:49:19.154039 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-fx99f"] Mar 19 09:49:19.171274 master-0 kubenswrapper[27819]: I0319 09:49:19.171161 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-db-sync-fvnxz"] Mar 19 09:49:19.192822 master-0 kubenswrapper[27819]: W0319 09:49:19.192676 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdbc1696_8633_4090_93f5_84b5ea19bc9a.slice/crio-5099aed382e9565b92d3ae6db17c3214164c57c5e0c610f77b5b3a4de3f136c0 WatchSource:0}: Error finding container 5099aed382e9565b92d3ae6db17c3214164c57c5e0c610f77b5b3a4de3f136c0: Status 404 returned error can't find the container with id 5099aed382e9565b92d3ae6db17c3214164c57c5e0c610f77b5b3a4de3f136c0 Mar 19 09:49:19.336368 master-0 kubenswrapper[27819]: I0319 09:49:19.334959 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3181847f-14a1-4821-b035-34fdf6008920" path="/var/lib/kubelet/pods/3181847f-14a1-4821-b035-34fdf6008920/volumes" Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: I0319 09:49:19.385692 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: E0319 09:49:19.386168 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3181847f-14a1-4821-b035-34fdf6008920" containerName="init" Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: I0319 09:49:19.386187 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="3181847f-14a1-4821-b035-34fdf6008920" containerName="init" Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: E0319 09:49:19.386241 27819 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3181847f-14a1-4821-b035-34fdf6008920" containerName="dnsmasq-dns" Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: I0319 09:49:19.386249 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="3181847f-14a1-4821-b035-34fdf6008920" containerName="dnsmasq-dns" Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: I0319 09:49:19.386493 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="3181847f-14a1-4821-b035-34fdf6008920" containerName="dnsmasq-dns" Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: I0319 09:49:19.387702 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.391973 master-0 kubenswrapper[27819]: I0319 09:49:19.389720 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 19 09:49:19.392387 master-0 kubenswrapper[27819]: I0319 09:49:19.392355 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-ae80b-default-external-config-data" Mar 19 09:49:19.420301 master-0 kubenswrapper[27819]: I0319 09:49:19.404607 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 19 09:49:19.420301 master-0 kubenswrapper[27819]: I0319 09:49:19.412010 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:49:19.441668 master-0 kubenswrapper[27819]: I0319 09:49:19.441587 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4bmsm"] Mar 19 09:49:19.508637 master-0 kubenswrapper[27819]: I0319 09:49:19.507838 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-569f-account-create-update-6h6km"] Mar 19 09:49:19.548257 master-0 kubenswrapper[27819]: I0319 09:49:19.548192 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.548257 master-0 kubenswrapper[27819]: I0319 09:49:19.548267 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.548649 master-0 kubenswrapper[27819]: I0319 09:49:19.548288 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.548649 master-0 kubenswrapper[27819]: I0319 09:49:19.548317 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.548649 master-0 kubenswrapper[27819]: I0319 09:49:19.548356 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.548649 master-0 
kubenswrapper[27819]: I0319 09:49:19.548388 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.548649 master-0 kubenswrapper[27819]: I0319 09:49:19.548479 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtk6\" (UniqueName: \"kubernetes.io/projected/7fca1c9e-90e0-4d27-82a9-503dd075744b-kube-api-access-wgtk6\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.548649 master-0 kubenswrapper[27819]: I0319 09:49:19.548507 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.651971 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgtk6\" (UniqueName: \"kubernetes.io/projected/7fca1c9e-90e0-4d27-82a9-503dd075744b-kube-api-access-wgtk6\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.652065 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-config-data\") pod 
\"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.652107 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.652162 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.652189 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.652218 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.652271 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.652310 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.653297 master-0 kubenswrapper[27819]: I0319 09:49:19.653136 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.655078 master-0 kubenswrapper[27819]: I0319 09:49:19.655044 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.658601 master-0 kubenswrapper[27819]: I0319 09:49:19.658257 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.662463 master-0 kubenswrapper[27819]: I0319 09:49:19.660080 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 09:49:19.662463 master-0 kubenswrapper[27819]: I0319 09:49:19.660156 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d8a587592d303f8470fc6f13326b5360e6df71aa3ac25c2d7cd8ffda26d20834/globalmount\"" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.662463 master-0 kubenswrapper[27819]: I0319 09:49:19.662396 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.667491 master-0 kubenswrapper[27819]: I0319 09:49:19.667440 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.718719 master-0 kubenswrapper[27819]: I0319 09:49:19.683371 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgtk6\" (UniqueName: \"kubernetes.io/projected/7fca1c9e-90e0-4d27-82a9-503dd075744b-kube-api-access-wgtk6\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.718719 master-0 kubenswrapper[27819]: I0319 09:49:19.683970 27819 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:19.718719 master-0 kubenswrapper[27819]: I0319 09:49:19.705937 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c7b89c5-sw8qr"] Mar 19 09:49:19.724053 master-0 kubenswrapper[27819]: W0319 09:49:19.723688 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5934b50c_8a57_4df0_83a5_a6cf7279d7f8.slice/crio-fa4763b947d4afe309fab8e168158ad1b36d0c981ae0a248a54ccb22b12ab041 WatchSource:0}: Error finding container fa4763b947d4afe309fab8e168158ad1b36d0c981ae0a248a54ccb22b12ab041: Status 404 returned error can't find the container with id fa4763b947d4afe309fab8e168158ad1b36d0c981ae0a248a54ccb22b12ab041 Mar 19 09:49:19.760167 master-0 kubenswrapper[27819]: I0319 09:49:19.760096 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-569f-account-create-update-6h6km" event={"ID":"825e20ea-e29b-4aef-a7ab-3c3c92147e1f","Type":"ContainerStarted","Data":"e4742fa69c839e90c8551c6e12e8ac0f5b11584abe94e0d61e8c4b10495949bb"} Mar 19 09:49:19.766379 master-0 kubenswrapper[27819]: I0319 09:49:19.766333 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-fx99f" event={"ID":"cdbc1696-8633-4090-93f5-84b5ea19bc9a","Type":"ContainerStarted","Data":"4dd2ab6395f5552881bbd2f281e57913e3797d55720f18a136da30f1cd9b2f42"} Mar 19 09:49:19.766379 master-0 kubenswrapper[27819]: I0319 09:49:19.766383 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-fx99f" event={"ID":"cdbc1696-8633-4090-93f5-84b5ea19bc9a","Type":"ContainerStarted","Data":"5099aed382e9565b92d3ae6db17c3214164c57c5e0c610f77b5b3a4de3f136c0"} 
Mar 19 09:49:19.775701 master-0 kubenswrapper[27819]: I0319 09:49:19.775660 27819 generic.go:334] "Generic (PLEG): container finished" podID="c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" containerID="9174f56b5f5ad32903a91ebdb7289a01d66f52c6d5adc6eae00bae15cf39ee2b" exitCode=0 Mar 19 09:49:19.776017 master-0 kubenswrapper[27819]: I0319 09:49:19.775723 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dc456b85-29flf" event={"ID":"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d","Type":"ContainerDied","Data":"9174f56b5f5ad32903a91ebdb7289a01d66f52c6d5adc6eae00bae15cf39ee2b"} Mar 19 09:49:19.776017 master-0 kubenswrapper[27819]: I0319 09:49:19.775748 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dc456b85-29flf" event={"ID":"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d","Type":"ContainerStarted","Data":"1bb7396c3cd13d3a8b06a0f398a5cb6ebd822920654b843e3124484a3bd7191d"} Mar 19 09:49:19.786664 master-0 kubenswrapper[27819]: I0319 09:49:19.784074 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d4bnt" event={"ID":"6ba4cae5-c391-4b24-8e04-5a97edc47fe2","Type":"ContainerStarted","Data":"bc4f09ab2712fd65e4041d58288c309b337c057e27b8efa27bd516d2a8320730"} Mar 19 09:49:19.786664 master-0 kubenswrapper[27819]: I0319 09:49:19.784139 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d4bnt" event={"ID":"6ba4cae5-c391-4b24-8e04-5a97edc47fe2","Type":"ContainerStarted","Data":"a1a65d45cfabf2d9be2ac65763ae11ba1b14cf2bfc83f7e9fef88bcb474083a8"} Mar 19 09:49:19.791793 master-0 kubenswrapper[27819]: I0319 09:49:19.791720 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4bmsm" event={"ID":"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0","Type":"ContainerStarted","Data":"bc60a964c0d0a77cfa1cdc6cf507a6dda779ece7a767966a32c80fc51d95303d"} Mar 19 09:49:19.815494 master-0 kubenswrapper[27819]: I0319 09:49:19.812038 27819 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" event={"ID":"5934b50c-8a57-4df0-83a5-a6cf7279d7f8","Type":"ContainerStarted","Data":"fa4763b947d4afe309fab8e168158ad1b36d0c981ae0a248a54ccb22b12ab041"} Mar 19 09:49:19.815494 master-0 kubenswrapper[27819]: I0319 09:49:19.812648 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-create-fx99f" podStartSLOduration=2.812629945 podStartE2EDuration="2.812629945s" podCreationTimestamp="2026-03-19 09:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:19.804467144 +0000 UTC m=+944.726044846" watchObservedRunningTime="2026-03-19 09:49:19.812629945 +0000 UTC m=+944.734207637" Mar 19 09:49:19.835789 master-0 kubenswrapper[27819]: I0319 09:49:19.832958 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-db-sync-fvnxz" event={"ID":"5ef7ab15-9976-4989-b837-55f0b27ee661","Type":"ContainerStarted","Data":"34ea14cbeb5ab13b8f49723bc477859d2466fe1aa9f9d5724b97ece954bb70d6"} Mar 19 09:49:19.835789 master-0 kubenswrapper[27819]: I0319 09:49:19.833607 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d4bnt" podStartSLOduration=2.833578257 podStartE2EDuration="2.833578257s" podCreationTimestamp="2026-03-19 09:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:19.833370042 +0000 UTC m=+944.754947754" watchObservedRunningTime="2026-03-19 09:49:19.833578257 +0000 UTC m=+944.755155949" Mar 19 09:49:19.921571 master-0 kubenswrapper[27819]: W0319 09:49:19.920844 27819 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1abe1e88_82d4_488e_bd25_08cf29f5952e.slice/crio-c8ff4ddc05916d2532246f4d2e2c1acef87e215e94c25e6ad68fd0cf2be0d51a WatchSource:0}: Error finding container c8ff4ddc05916d2532246f4d2e2c1acef87e215e94c25e6ad68fd0cf2be0d51a: Status 404 returned error can't find the container with id c8ff4ddc05916d2532246f4d2e2c1acef87e215e94c25e6ad68fd0cf2be0d51a Mar 19 09:49:19.926416 master-0 kubenswrapper[27819]: I0319 09:49:19.923899 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-s6mzj"] Mar 19 09:49:20.286417 master-0 kubenswrapper[27819]: I0319 09:49:20.286092 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:20.378324 master-0 kubenswrapper[27819]: I0319 09:49:20.378261 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-swift-storage-0\") pod \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " Mar 19 09:49:20.378812 master-0 kubenswrapper[27819]: I0319 09:49:20.378791 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-config\") pod \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " Mar 19 09:49:20.385619 master-0 kubenswrapper[27819]: I0319 09:49:20.379097 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-nb\") pod \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " Mar 19 09:49:20.386017 master-0 kubenswrapper[27819]: I0319 09:49:20.385975 27819 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-svc\") pod \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " Mar 19 09:49:20.386147 master-0 kubenswrapper[27819]: I0319 09:49:20.386126 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-sb\") pod \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " Mar 19 09:49:20.386250 master-0 kubenswrapper[27819]: I0319 09:49:20.386233 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98fb9\" (UniqueName: \"kubernetes.io/projected/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-kube-api-access-98fb9\") pod \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\" (UID: \"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d\") " Mar 19 09:49:20.395194 master-0 kubenswrapper[27819]: I0319 09:49:20.395073 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-kube-api-access-98fb9" (OuterVolumeSpecName: "kube-api-access-98fb9") pod "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" (UID: "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d"). InnerVolumeSpecName "kube-api-access-98fb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:20.417169 master-0 kubenswrapper[27819]: I0319 09:49:20.412144 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" (UID: "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:20.417169 master-0 kubenswrapper[27819]: I0319 09:49:20.413176 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" (UID: "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:20.438811 master-0 kubenswrapper[27819]: I0319 09:49:20.438741 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" (UID: "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:20.447665 master-0 kubenswrapper[27819]: I0319 09:49:20.446869 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-config" (OuterVolumeSpecName: "config") pod "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" (UID: "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:20.449432 master-0 kubenswrapper[27819]: I0319 09:49:20.449273 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" (UID: "c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:20.489607 master-0 kubenswrapper[27819]: I0319 09:49:20.489516 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:20.489607 master-0 kubenswrapper[27819]: I0319 09:49:20.489593 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:20.489607 master-0 kubenswrapper[27819]: I0319 09:49:20.489605 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:20.489607 master-0 kubenswrapper[27819]: I0319 09:49:20.489614 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:20.489607 master-0 kubenswrapper[27819]: I0319 09:49:20.489622 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:20.489968 master-0 kubenswrapper[27819]: I0319 09:49:20.489632 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98fb9\" (UniqueName: \"kubernetes.io/projected/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d-kube-api-access-98fb9\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:20.618184 master-0 kubenswrapper[27819]: I0319 09:49:20.618095 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"] Mar 19 09:49:20.618843 master-0 
kubenswrapper[27819]: E0319 09:49:20.618712 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" containerName="init" Mar 19 09:49:20.618843 master-0 kubenswrapper[27819]: I0319 09:49:20.618733 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" containerName="init" Mar 19 09:49:20.619103 master-0 kubenswrapper[27819]: I0319 09:49:20.619021 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" containerName="init" Mar 19 09:49:20.622569 master-0 kubenswrapper[27819]: I0319 09:49:20.621358 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.642051 master-0 kubenswrapper[27819]: I0319 09:49:20.642012 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 19 09:49:20.642274 master-0 kubenswrapper[27819]: I0319 09:49:20.642239 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-ae80b-default-internal-config-data" Mar 19 09:49:20.660104 master-0 kubenswrapper[27819]: I0319 09:49:20.660017 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"] Mar 19 09:49:20.694090 master-0 kubenswrapper[27819]: I0319 09:49:20.693623 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-internal-tls-certs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.694090 master-0 kubenswrapper[27819]: I0319 09:49:20.693742 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-httpd-run\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.694090 master-0 kubenswrapper[27819]: I0319 09:49:20.693783 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.694090 master-0 kubenswrapper[27819]: I0319 09:49:20.694022 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-logs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.694090 master-0 kubenswrapper[27819]: I0319 09:49:20.694067 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5kh\" (UniqueName: \"kubernetes.io/projected/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-kube-api-access-wr5kh\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.694090 master-0 kubenswrapper[27819]: I0319 09:49:20.694116 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-combined-ca-bundle\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 
19 09:49:20.695338 master-0 kubenswrapper[27819]: I0319 09:49:20.694142 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-scripts\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.695338 master-0 kubenswrapper[27819]: I0319 09:49:20.694183 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-config-data\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.723657 master-0 kubenswrapper[27819]: I0319 09:49:20.722738 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:49:20.723850 master-0 kubenswrapper[27819]: E0319 09:49:20.723673 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-ae80b-default-external-api-0" podUID="7fca1c9e-90e0-4d27-82a9-503dd075744b" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.796769 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5kh\" (UniqueName: \"kubernetes.io/projected/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-kube-api-access-wr5kh\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.796850 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-combined-ca-bundle\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.796930 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-scripts\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.796967 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-config-data\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.797106 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-internal-tls-certs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.797190 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-httpd-run\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.797255 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.797371 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-logs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.798655 master-0 kubenswrapper[27819]: I0319 09:49:20.798012 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-logs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.801114 master-0 kubenswrapper[27819]: I0319 09:49:20.800743 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-httpd-run\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.812514 master-0 kubenswrapper[27819]: I0319 09:49:20.803899 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-combined-ca-bundle\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.812514 master-0 kubenswrapper[27819]: I0319 09:49:20.806640 
27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-scripts\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.812514 master-0 kubenswrapper[27819]: I0319 09:49:20.807253 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 19 09:49:20.812514 master-0 kubenswrapper[27819]: I0319 09:49:20.807280 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2c67bbd9bf8089e21ab79a2f4175808b6bfebe7eda66c90638541094af90db59/globalmount\"" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.829262 master-0 kubenswrapper[27819]: I0319 09:49:20.828454 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-internal-tls-certs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.829981 master-0 kubenswrapper[27819]: I0319 09:49:20.829941 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-config-data\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.867207 master-0 kubenswrapper[27819]: I0319 09:49:20.867150 
27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5kh\" (UniqueName: \"kubernetes.io/projected/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-kube-api-access-wr5kh\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:20.880166 master-0 kubenswrapper[27819]: I0319 09:49:20.880098 27819 generic.go:334] "Generic (PLEG): container finished" podID="cdbc1696-8633-4090-93f5-84b5ea19bc9a" containerID="4dd2ab6395f5552881bbd2f281e57913e3797d55720f18a136da30f1cd9b2f42" exitCode=0 Mar 19 09:49:20.880603 master-0 kubenswrapper[27819]: I0319 09:49:20.880324 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-fx99f" event={"ID":"cdbc1696-8633-4090-93f5-84b5ea19bc9a","Type":"ContainerDied","Data":"4dd2ab6395f5552881bbd2f281e57913e3797d55720f18a136da30f1cd9b2f42"} Mar 19 09:49:20.885500 master-0 kubenswrapper[27819]: I0319 09:49:20.885426 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dc456b85-29flf" event={"ID":"c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d","Type":"ContainerDied","Data":"1bb7396c3cd13d3a8b06a0f398a5cb6ebd822920654b843e3124484a3bd7191d"} Mar 19 09:49:20.885645 master-0 kubenswrapper[27819]: I0319 09:49:20.885516 27819 scope.go:117] "RemoveContainer" containerID="9174f56b5f5ad32903a91ebdb7289a01d66f52c6d5adc6eae00bae15cf39ee2b" Mar 19 09:49:20.886958 master-0 kubenswrapper[27819]: I0319 09:49:20.886895 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dc456b85-29flf" Mar 19 09:49:20.899307 master-0 kubenswrapper[27819]: I0319 09:49:20.899241 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6mzj" event={"ID":"1abe1e88-82d4-488e-bd25-08cf29f5952e","Type":"ContainerStarted","Data":"c8ff4ddc05916d2532246f4d2e2c1acef87e215e94c25e6ad68fd0cf2be0d51a"} Mar 19 09:49:20.922633 master-0 kubenswrapper[27819]: I0319 09:49:20.920056 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4bmsm" event={"ID":"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0","Type":"ContainerStarted","Data":"b108245d4e4e64ade9bcd2737ecd345356a67a3af201f5d6c0ab09fe5b888925"} Mar 19 09:49:20.933967 master-0 kubenswrapper[27819]: I0319 09:49:20.933918 27819 generic.go:334] "Generic (PLEG): container finished" podID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerID="49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03" exitCode=0 Mar 19 09:49:20.934251 master-0 kubenswrapper[27819]: I0319 09:49:20.934227 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" event={"ID":"5934b50c-8a57-4df0-83a5-a6cf7279d7f8","Type":"ContainerDied","Data":"49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03"} Mar 19 09:49:20.940400 master-0 kubenswrapper[27819]: I0319 09:49:20.940360 27819 generic.go:334] "Generic (PLEG): container finished" podID="825e20ea-e29b-4aef-a7ab-3c3c92147e1f" containerID="d8bdd42c4174286c5fb6e3942a1b4408930a558c5d95acc4f667196df39cbcd2" exitCode=0 Mar 19 09:49:20.941926 master-0 kubenswrapper[27819]: I0319 09:49:20.941900 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-569f-account-create-update-6h6km" event={"ID":"825e20ea-e29b-4aef-a7ab-3c3c92147e1f","Type":"ContainerDied","Data":"d8bdd42c4174286c5fb6e3942a1b4408930a558c5d95acc4f667196df39cbcd2"} Mar 19 09:49:20.942245 master-0 kubenswrapper[27819]: I0319 09:49:20.942227 27819 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:20.983549 master-0 kubenswrapper[27819]: I0319 09:49:20.983474 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:21.015057 master-0 kubenswrapper[27819]: I0319 09:49:21.006848 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-logs\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.015057 master-0 kubenswrapper[27819]: I0319 09:49:21.006910 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-public-tls-certs\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.015057 master-0 kubenswrapper[27819]: I0319 09:49:21.007011 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgtk6\" (UniqueName: \"kubernetes.io/projected/7fca1c9e-90e0-4d27-82a9-503dd075744b-kube-api-access-wgtk6\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.015057 master-0 kubenswrapper[27819]: I0319 09:49:21.007177 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-scripts\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.015057 master-0 kubenswrapper[27819]: I0319 09:49:21.007209 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-httpd-run\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.015057 master-0 kubenswrapper[27819]: I0319 09:49:21.007275 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-config-data\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.015057 master-0 kubenswrapper[27819]: I0319 09:49:21.007345 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-combined-ca-bundle\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.028109 master-0 kubenswrapper[27819]: I0319 09:49:21.017394 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dc456b85-29flf"] Mar 19 09:49:21.028109 master-0 kubenswrapper[27819]: I0319 09:49:21.017817 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:49:21.032812 master-0 kubenswrapper[27819]: I0319 09:49:21.032753 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-logs" (OuterVolumeSpecName: "logs") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:49:21.045434 master-0 kubenswrapper[27819]: I0319 09:49:21.045376 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dc456b85-29flf"] Mar 19 09:49:21.049104 master-0 kubenswrapper[27819]: I0319 09:49:21.049048 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:21.049248 master-0 kubenswrapper[27819]: I0319 09:49:21.049082 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-scripts" (OuterVolumeSpecName: "scripts") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:21.050158 master-0 kubenswrapper[27819]: I0319 09:49:21.050116 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fca1c9e-90e0-4d27-82a9-503dd075744b-kube-api-access-wgtk6" (OuterVolumeSpecName: "kube-api-access-wgtk6") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). InnerVolumeSpecName "kube-api-access-wgtk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:21.098018 master-0 kubenswrapper[27819]: I0319 09:49:21.097907 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:21.098489 master-0 kubenswrapper[27819]: I0319 09:49:21.098436 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-config-data" (OuterVolumeSpecName: "config-data") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:21.110444 master-0 kubenswrapper[27819]: I0319 09:49:21.110303 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:21.110444 master-0 kubenswrapper[27819]: I0319 09:49:21.110351 27819 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:21.110444 master-0 kubenswrapper[27819]: I0319 09:49:21.110363 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:21.110444 master-0 kubenswrapper[27819]: I0319 09:49:21.110372 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:21.110444 master-0 kubenswrapper[27819]: I0319 09:49:21.110384 27819 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7fca1c9e-90e0-4d27-82a9-503dd075744b-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:21.110444 master-0 kubenswrapper[27819]: I0319 
09:49:21.110395 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fca1c9e-90e0-4d27-82a9-503dd075744b-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:21.110444 master-0 kubenswrapper[27819]: I0319 09:49:21.110408 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgtk6\" (UniqueName: \"kubernetes.io/projected/7fca1c9e-90e0-4d27-82a9-503dd075744b-kube-api-access-wgtk6\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:21.231990 master-0 kubenswrapper[27819]: I0319 09:49:21.231790 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4bmsm" podStartSLOduration=4.231760686 podStartE2EDuration="4.231760686s" podCreationTimestamp="2026-03-19 09:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:21.21915415 +0000 UTC m=+946.140731852" watchObservedRunningTime="2026-03-19 09:49:21.231760686 +0000 UTC m=+946.153338378" Mar 19 09:49:21.253797 master-0 kubenswrapper[27819]: I0319 09:49:21.253056 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:21.298444 master-0 kubenswrapper[27819]: I0319 09:49:21.298379 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d" path="/var/lib/kubelet/pods/c3a2b55d-11f5-4b86-ad8e-1b1bb6f6138d/volumes" Mar 19 09:49:21.318104 master-0 kubenswrapper[27819]: I0319 09:49:21.318030 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"7fca1c9e-90e0-4d27-82a9-503dd075744b\" (UID: \"7fca1c9e-90e0-4d27-82a9-503dd075744b\") " Mar 19 09:49:21.963144 master-0 kubenswrapper[27819]: I0319 09:49:21.962796 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" event={"ID":"5934b50c-8a57-4df0-83a5-a6cf7279d7f8","Type":"ContainerStarted","Data":"18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5"} Mar 19 09:49:21.963144 master-0 kubenswrapper[27819]: I0319 09:49:21.962934 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:21.990310 master-0 kubenswrapper[27819]: I0319 09:49:21.990212 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" podStartSLOduration=4.990185443 podStartE2EDuration="4.990185443s" podCreationTimestamp="2026-03-19 09:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:21.988598872 +0000 UTC m=+946.910176574" watchObservedRunningTime="2026-03-19 09:49:21.990185443 +0000 UTC m=+946.911763135" Mar 19 09:49:22.542049 master-0 kubenswrapper[27819]: I0319 09:49:22.542002 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:22.662410 master-0 kubenswrapper[27819]: I0319 09:49:22.662350 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqf7v\" (UniqueName: \"kubernetes.io/projected/cdbc1696-8633-4090-93f5-84b5ea19bc9a-kube-api-access-gqf7v\") pod \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\" (UID: \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " Mar 19 09:49:22.662789 master-0 kubenswrapper[27819]: I0319 09:49:22.662750 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbc1696-8633-4090-93f5-84b5ea19bc9a-operator-scripts\") pod \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\" (UID: \"cdbc1696-8633-4090-93f5-84b5ea19bc9a\") " Mar 19 09:49:22.663358 master-0 kubenswrapper[27819]: I0319 09:49:22.663316 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbc1696-8633-4090-93f5-84b5ea19bc9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdbc1696-8633-4090-93f5-84b5ea19bc9a" (UID: "cdbc1696-8633-4090-93f5-84b5ea19bc9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:22.663783 master-0 kubenswrapper[27819]: I0319 09:49:22.663759 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbc1696-8633-4090-93f5-84b5ea19bc9a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:22.666625 master-0 kubenswrapper[27819]: I0319 09:49:22.666523 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbc1696-8633-4090-93f5-84b5ea19bc9a-kube-api-access-gqf7v" (OuterVolumeSpecName: "kube-api-access-gqf7v") pod "cdbc1696-8633-4090-93f5-84b5ea19bc9a" (UID: "cdbc1696-8633-4090-93f5-84b5ea19bc9a"). InnerVolumeSpecName "kube-api-access-gqf7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:22.742630 master-0 kubenswrapper[27819]: I0319 09:49:22.742532 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:22.758668 master-0 kubenswrapper[27819]: I0319 09:49:22.758610 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:22.765769 master-0 kubenswrapper[27819]: I0319 09:49:22.765686 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqf7v\" (UniqueName: \"kubernetes.io/projected/cdbc1696-8633-4090-93f5-84b5ea19bc9a-kube-api-access-gqf7v\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:22.772628 master-0 kubenswrapper[27819]: I0319 09:49:22.772571 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81" (OuterVolumeSpecName: "glance") pod "7fca1c9e-90e0-4d27-82a9-503dd075744b" (UID: "7fca1c9e-90e0-4d27-82a9-503dd075744b"). InnerVolumeSpecName "pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 09:49:22.870505 master-0 kubenswrapper[27819]: I0319 09:49:22.870433 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-operator-scripts\") pod \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " Mar 19 09:49:22.870873 master-0 kubenswrapper[27819]: I0319 09:49:22.870800 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7mm7\" (UniqueName: \"kubernetes.io/projected/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-kube-api-access-q7mm7\") pod \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\" (UID: \"825e20ea-e29b-4aef-a7ab-3c3c92147e1f\") " Mar 19 09:49:22.871036 master-0 kubenswrapper[27819]: I0319 09:49:22.871010 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "825e20ea-e29b-4aef-a7ab-3c3c92147e1f" (UID: "825e20ea-e29b-4aef-a7ab-3c3c92147e1f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:22.873306 master-0 kubenswrapper[27819]: I0319 09:49:22.871767 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:22.873306 master-0 kubenswrapper[27819]: I0319 09:49:22.871814 27819 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") on node \"master-0\" " Mar 19 09:49:22.874786 master-0 kubenswrapper[27819]: I0319 09:49:22.874655 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-kube-api-access-q7mm7" (OuterVolumeSpecName: "kube-api-access-q7mm7") pod "825e20ea-e29b-4aef-a7ab-3c3c92147e1f" (UID: "825e20ea-e29b-4aef-a7ab-3c3c92147e1f"). InnerVolumeSpecName "kube-api-access-q7mm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:22.916180 master-0 kubenswrapper[27819]: I0319 09:49:22.916084 27819 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 09:49:22.916431 master-0 kubenswrapper[27819]: I0319 09:49:22.916239 27819 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e" (UniqueName: "kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81") on node "master-0" Mar 19 09:49:22.982272 master-0 kubenswrapper[27819]: I0319 09:49:22.982217 27819 reconciler_common.go:293] "Volume detached for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:22.982272 master-0 kubenswrapper[27819]: I0319 09:49:22.982272 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7mm7\" (UniqueName: \"kubernetes.io/projected/825e20ea-e29b-4aef-a7ab-3c3c92147e1f-kube-api-access-q7mm7\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:22.985679 master-0 kubenswrapper[27819]: I0319 09:49:22.985640 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-fx99f" Mar 19 09:49:22.997139 master-0 kubenswrapper[27819]: I0319 09:49:22.995631 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-fx99f" event={"ID":"cdbc1696-8633-4090-93f5-84b5ea19bc9a","Type":"ContainerDied","Data":"5099aed382e9565b92d3ae6db17c3214164c57c5e0c610f77b5b3a4de3f136c0"} Mar 19 09:49:22.997139 master-0 kubenswrapper[27819]: I0319 09:49:22.995692 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5099aed382e9565b92d3ae6db17c3214164c57c5e0c610f77b5b3a4de3f136c0" Mar 19 09:49:23.000262 master-0 kubenswrapper[27819]: I0319 09:49:23.000214 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-569f-account-create-update-6h6km" Mar 19 09:49:23.001386 master-0 kubenswrapper[27819]: I0319 09:49:23.001336 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-569f-account-create-update-6h6km" event={"ID":"825e20ea-e29b-4aef-a7ab-3c3c92147e1f","Type":"ContainerDied","Data":"e4742fa69c839e90c8551c6e12e8ac0f5b11584abe94e0d61e8c4b10495949bb"} Mar 19 09:49:23.001386 master-0 kubenswrapper[27819]: I0319 09:49:23.001377 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4742fa69c839e90c8551c6e12e8ac0f5b11584abe94e0d61e8c4b10495949bb" Mar 19 09:49:23.007733 master-0 kubenswrapper[27819]: I0319 09:49:23.001394 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:49:23.104568 master-0 kubenswrapper[27819]: I0319 09:49:23.104318 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:23.137127 master-0 kubenswrapper[27819]: I0319 09:49:23.117606 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:49:23.140736 master-0 kubenswrapper[27819]: I0319 09:49:23.140058 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:49:23.256570 master-0 kubenswrapper[27819]: I0319 09:49:23.256180 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:49:23.260578 master-0 kubenswrapper[27819]: E0319 09:49:23.256822 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbc1696-8633-4090-93f5-84b5ea19bc9a" containerName="mariadb-database-create" Mar 19 09:49:23.260578 master-0 kubenswrapper[27819]: I0319 09:49:23.256848 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbc1696-8633-4090-93f5-84b5ea19bc9a" 
containerName="mariadb-database-create"
Mar 19 09:49:23.260578 master-0 kubenswrapper[27819]: E0319 09:49:23.256859 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825e20ea-e29b-4aef-a7ab-3c3c92147e1f" containerName="mariadb-account-create-update"
Mar 19 09:49:23.260578 master-0 kubenswrapper[27819]: I0319 09:49:23.256867 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="825e20ea-e29b-4aef-a7ab-3c3c92147e1f" containerName="mariadb-account-create-update"
Mar 19 09:49:23.260578 master-0 kubenswrapper[27819]: I0319 09:49:23.257172 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="825e20ea-e29b-4aef-a7ab-3c3c92147e1f" containerName="mariadb-account-create-update"
Mar 19 09:49:23.260578 master-0 kubenswrapper[27819]: I0319 09:49:23.257198 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbc1696-8633-4090-93f5-84b5ea19bc9a" containerName="mariadb-database-create"
Mar 19 09:49:23.260578 master-0 kubenswrapper[27819]: I0319 09:49:23.258608 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.273571 master-0 kubenswrapper[27819]: I0319 09:49:23.264481 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 19 09:49:23.295564 master-0 kubenswrapper[27819]: I0319 09:49:23.276528 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-ae80b-default-external-config-data"
Mar 19 09:49:23.321930 master-0 kubenswrapper[27819]: I0319 09:49:23.321077 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fca1c9e-90e0-4d27-82a9-503dd075744b" path="/var/lib/kubelet/pods/7fca1c9e-90e0-4d27-82a9-503dd075744b/volumes"
Mar 19 09:49:23.321930 master-0 kubenswrapper[27819]: I0319 09:49:23.321690 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"]
Mar 19 09:49:23.433716 master-0 kubenswrapper[27819]: I0319 09:49:23.433299 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.433716 master-0 kubenswrapper[27819]: I0319 09:49:23.433374 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.433716 master-0 kubenswrapper[27819]: I0319 09:49:23.433440 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwdb\" (UniqueName: \"kubernetes.io/projected/2ccd264e-dca5-4707-9b98-868e25c16500-kube-api-access-vjwdb\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.433716 master-0 kubenswrapper[27819]: I0319 09:49:23.433511 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.433716 master-0 kubenswrapper[27819]: I0319 09:49:23.433597 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.433716 master-0 kubenswrapper[27819]: I0319 09:49:23.433645 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.434273 master-0 kubenswrapper[27819]: I0319 09:49:23.433750 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.434273 master-0 kubenswrapper[27819]: I0319 09:49:23.433803 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.540940 master-0 kubenswrapper[27819]: I0319 09:49:23.540882 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.541159 master-0 kubenswrapper[27819]: I0319 09:49:23.541075 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.541159 master-0 kubenswrapper[27819]: I0319 09:49:23.541107 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.541231 master-0 kubenswrapper[27819]: I0319 09:49:23.541180 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwdb\" (UniqueName: \"kubernetes.io/projected/2ccd264e-dca5-4707-9b98-868e25c16500-kube-api-access-vjwdb\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.541324 master-0 kubenswrapper[27819]: I0319 09:49:23.541296 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.541364 master-0 kubenswrapper[27819]: I0319 09:49:23.541356 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.543005 master-0 kubenswrapper[27819]: I0319 09:49:23.541398 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.543005 master-0 kubenswrapper[27819]: I0319 09:49:23.541506 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.543005 master-0 kubenswrapper[27819]: I0319 09:49:23.542741 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.543432 master-0 kubenswrapper[27819]: I0319 09:49:23.543410 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.544851 master-0 kubenswrapper[27819]: I0319 09:49:23.544825 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 09:49:23.544940 master-0 kubenswrapper[27819]: I0319 09:49:23.544853 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d8a587592d303f8470fc6f13326b5360e6df71aa3ac25c2d7cd8ffda26d20834/globalmount\"" pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.545154 master-0 kubenswrapper[27819]: I0319 09:49:23.545123 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.547855 master-0 kubenswrapper[27819]: I0319 09:49:23.547815 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.547959 master-0 kubenswrapper[27819]: I0319 09:49:23.547938 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.551099 master-0 kubenswrapper[27819]: I0319 09:49:23.551048 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:23.564749 master-0 kubenswrapper[27819]: I0319 09:49:23.564213 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwdb\" (UniqueName: \"kubernetes.io/projected/2ccd264e-dca5-4707-9b98-868e25c16500-kube-api-access-vjwdb\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:24.906950 master-0 kubenswrapper[27819]: I0319 09:49:24.906886 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:25.148012 master-0 kubenswrapper[27819]: I0319 09:49:25.147938 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:49:26.048953 master-0 kubenswrapper[27819]: I0319 09:49:26.048800 27819 generic.go:334] "Generic (PLEG): container finished" podID="6ba4cae5-c391-4b24-8e04-5a97edc47fe2" containerID="bc4f09ab2712fd65e4041d58288c309b337c057e27b8efa27bd516d2a8320730" exitCode=0
Mar 19 09:49:26.048953 master-0 kubenswrapper[27819]: I0319 09:49:26.048875 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d4bnt" event={"ID":"6ba4cae5-c391-4b24-8e04-5a97edc47fe2","Type":"ContainerDied","Data":"bc4f09ab2712fd65e4041d58288c309b337c057e27b8efa27bd516d2a8320730"}
Mar 19 09:49:26.051079 master-0 kubenswrapper[27819]: I0319 09:49:26.051019 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6mzj" event={"ID":"1abe1e88-82d4-488e-bd25-08cf29f5952e","Type":"ContainerStarted","Data":"16efc548ff718c4742f6ef96f68919148a2e5375bb900f86044285f55d8e809e"}
Mar 19 09:49:26.202790 master-0 kubenswrapper[27819]: I0319 09:49:26.202691 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-s6mzj" podStartSLOduration=3.661426283 podStartE2EDuration="9.202660994s" podCreationTimestamp="2026-03-19 09:49:17 +0000 UTC" firstStartedPulling="2026-03-19 09:49:19.923026184 +0000 UTC m=+944.844603877" lastFinishedPulling="2026-03-19 09:49:25.464260896 +0000 UTC m=+950.385838588" observedRunningTime="2026-03-19 09:49:26.19710917 +0000 UTC m=+951.118686862" watchObservedRunningTime="2026-03-19 09:49:26.202660994 +0000 UTC m=+951.124238686"
Mar 19 09:49:26.265671 master-0 kubenswrapper[27819]: I0319 09:49:26.265626 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"]
Mar 19 09:49:26.809567 master-0 kubenswrapper[27819]: I0319 09:49:26.809484 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"]
Mar 19 09:49:27.078627 master-0 kubenswrapper[27819]: I0319 09:49:27.078445 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad","Type":"ContainerStarted","Data":"ce484e028154d421d4730fff341621ec63228c22e5e146b1debaf7b1e9d89357"}
Mar 19 09:49:27.078627 master-0 kubenswrapper[27819]: I0319 09:49:27.078576 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad","Type":"ContainerStarted","Data":"5ddcbb0158ecb1999ebf474040f3019a06b8271050b80f7461441fe3bef06f55"}
Mar 19 09:49:27.080029 master-0 kubenswrapper[27819]: I0319 09:49:27.079996 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"2ccd264e-dca5-4707-9b98-868e25c16500","Type":"ContainerStarted","Data":"9f9e2bad0af9d6208d654f034bb2c834a9fd6e7bac4205389ab88e706d3c2c1f"}
Mar 19 09:49:27.582857 master-0 kubenswrapper[27819]: I0319 09:49:27.582800 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d4bnt"
Mar 19 09:49:27.637414 master-0 kubenswrapper[27819]: I0319 09:49:27.636963 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9wvp\" (UniqueName: \"kubernetes.io/projected/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-kube-api-access-g9wvp\") pod \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") "
Mar 19 09:49:27.637414 master-0 kubenswrapper[27819]: I0319 09:49:27.637073 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-scripts\") pod \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") "
Mar 19 09:49:27.637414 master-0 kubenswrapper[27819]: I0319 09:49:27.637226 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-fernet-keys\") pod \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") "
Mar 19 09:49:27.637414 master-0 kubenswrapper[27819]: I0319 09:49:27.637286 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-credential-keys\") pod \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") "
Mar 19 09:49:27.637414 master-0 kubenswrapper[27819]: I0319 09:49:27.637379 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-config-data\") pod \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") "
Mar 19 09:49:27.637414 master-0 kubenswrapper[27819]: I0319 09:49:27.637399 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-combined-ca-bundle\") pod \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\" (UID: \"6ba4cae5-c391-4b24-8e04-5a97edc47fe2\") "
Mar 19 09:49:27.641663 master-0 kubenswrapper[27819]: I0319 09:49:27.641524 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6ba4cae5-c391-4b24-8e04-5a97edc47fe2" (UID: "6ba4cae5-c391-4b24-8e04-5a97edc47fe2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:27.641663 master-0 kubenswrapper[27819]: I0319 09:49:27.641572 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-kube-api-access-g9wvp" (OuterVolumeSpecName: "kube-api-access-g9wvp") pod "6ba4cae5-c391-4b24-8e04-5a97edc47fe2" (UID: "6ba4cae5-c391-4b24-8e04-5a97edc47fe2"). InnerVolumeSpecName "kube-api-access-g9wvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:49:27.650572 master-0 kubenswrapper[27819]: I0319 09:49:27.650457 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-scripts" (OuterVolumeSpecName: "scripts") pod "6ba4cae5-c391-4b24-8e04-5a97edc47fe2" (UID: "6ba4cae5-c391-4b24-8e04-5a97edc47fe2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:27.661562 master-0 kubenswrapper[27819]: I0319 09:49:27.661249 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6ba4cae5-c391-4b24-8e04-5a97edc47fe2" (UID: "6ba4cae5-c391-4b24-8e04-5a97edc47fe2"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:27.697015 master-0 kubenswrapper[27819]: I0319 09:49:27.696916 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ba4cae5-c391-4b24-8e04-5a97edc47fe2" (UID: "6ba4cae5-c391-4b24-8e04-5a97edc47fe2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:27.706081 master-0 kubenswrapper[27819]: I0319 09:49:27.705921 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-config-data" (OuterVolumeSpecName: "config-data") pod "6ba4cae5-c391-4b24-8e04-5a97edc47fe2" (UID: "6ba4cae5-c391-4b24-8e04-5a97edc47fe2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:27.743962 master-0 kubenswrapper[27819]: I0319 09:49:27.743816 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9wvp\" (UniqueName: \"kubernetes.io/projected/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-kube-api-access-g9wvp\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:27.743962 master-0 kubenswrapper[27819]: I0319 09:49:27.743873 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:27.743962 master-0 kubenswrapper[27819]: I0319 09:49:27.743889 27819 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:27.743962 master-0 kubenswrapper[27819]: I0319 09:49:27.743904 27819 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-credential-keys\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:27.743962 master-0 kubenswrapper[27819]: I0319 09:49:27.743916 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:27.743962 master-0 kubenswrapper[27819]: I0319 09:49:27.743929 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba4cae5-c391-4b24-8e04-5a97edc47fe2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:28.103660 master-0 kubenswrapper[27819]: I0319 09:49:28.101452 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad","Type":"ContainerStarted","Data":"9333f4d16522e3ef2410a1d2292df7fa579e47ba4d510e97031b4124411d5eb1"}
Mar 19 09:49:28.116219 master-0 kubenswrapper[27819]: I0319 09:49:28.116148 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"2ccd264e-dca5-4707-9b98-868e25c16500","Type":"ContainerStarted","Data":"cd060a3700040059fc15ffe780dadd9e26169e1931b5f1db2bbdf671a5b7c41f"}
Mar 19 09:49:28.117865 master-0 kubenswrapper[27819]: I0319 09:49:28.117830 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d4bnt" event={"ID":"6ba4cae5-c391-4b24-8e04-5a97edc47fe2","Type":"ContainerDied","Data":"a1a65d45cfabf2d9be2ac65763ae11ba1b14cf2bfc83f7e9fef88bcb474083a8"}
Mar 19 09:49:28.117947 master-0 kubenswrapper[27819]: I0319 09:49:28.117864 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a65d45cfabf2d9be2ac65763ae11ba1b14cf2bfc83f7e9fef88bcb474083a8"
Mar 19 09:49:28.117947 master-0 kubenswrapper[27819]: I0319 09:49:28.117921 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d4bnt"
Mar 19 09:49:28.476751 master-0 kubenswrapper[27819]: I0319 09:49:28.476423 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr"
Mar 19 09:49:29.003055 master-0 kubenswrapper[27819]: I0319 09:49:29.002950 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ae80b-default-internal-api-0" podStartSLOduration=9.002931262 podStartE2EDuration="9.002931262s" podCreationTimestamp="2026-03-19 09:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:28.9982095 +0000 UTC m=+953.919787202" watchObservedRunningTime="2026-03-19 09:49:29.002931262 +0000 UTC m=+953.924508954"
Mar 19 09:49:29.140884 master-0 kubenswrapper[27819]: I0319 09:49:29.140798 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"2ccd264e-dca5-4707-9b98-868e25c16500","Type":"ContainerStarted","Data":"d13f6ea63b1ae6ffc84de07f836582898750d3654cc788d090679a0507be935e"}
Mar 19 09:49:29.683844 master-0 kubenswrapper[27819]: I0319 09:49:29.683756 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ae80b-default-external-api-0" podStartSLOduration=7.683727357 podStartE2EDuration="7.683727357s" podCreationTimestamp="2026-03-19 09:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:29.682206799 +0000 UTC m=+954.603784501" watchObservedRunningTime="2026-03-19 09:49:29.683727357 +0000 UTC m=+954.605305049"
Mar 19 09:49:29.748753 master-0 kubenswrapper[27819]: I0319 09:49:29.744759 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-ht7bl"]
Mar 19 09:49:29.748753 master-0 kubenswrapper[27819]: I0319 09:49:29.745217 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="dnsmasq-dns" containerID="cri-o://3ee9584c1f4889f02c39223d39a759e17fb0d166c173d3751dc22c602c027422" gracePeriod=10
Mar 19 09:49:29.801255 master-0 kubenswrapper[27819]: I0319 09:49:29.801192 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d4bnt"]
Mar 19 09:49:29.846978 master-0 kubenswrapper[27819]: I0319 09:49:29.846877 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d4bnt"]
Mar 19 09:49:29.872058 master-0 kubenswrapper[27819]: I0319 09:49:29.871991 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-975s6"]
Mar 19 09:49:29.873035 master-0 kubenswrapper[27819]: E0319 09:49:29.873014 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba4cae5-c391-4b24-8e04-5a97edc47fe2" containerName="keystone-bootstrap"
Mar 19 09:49:29.873149 master-0 kubenswrapper[27819]: I0319 09:49:29.873137 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba4cae5-c391-4b24-8e04-5a97edc47fe2" containerName="keystone-bootstrap"
Mar 19 09:49:29.873617 master-0 kubenswrapper[27819]: I0319 09:49:29.873587 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba4cae5-c391-4b24-8e04-5a97edc47fe2" containerName="keystone-bootstrap"
Mar 19 09:49:29.875210 master-0 kubenswrapper[27819]: I0319 09:49:29.875166 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:29.879598 master-0 kubenswrapper[27819]: I0319 09:49:29.878169 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data"
Mar 19 09:49:29.879598 master-0 kubenswrapper[27819]: I0319 09:49:29.878356 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts"
Mar 19 09:49:29.879598 master-0 kubenswrapper[27819]: I0319 09:49:29.878448 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 19 09:49:29.892107 master-0 kubenswrapper[27819]: I0319 09:49:29.892008 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-975s6"]
Mar 19 09:49:29.905487 master-0 kubenswrapper[27819]: I0319 09:49:29.905398 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-52q7d"]
Mar 19 09:49:29.907281 master-0 kubenswrapper[27819]: I0319 09:49:29.907247 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:29.912339 master-0 kubenswrapper[27819]: I0319 09:49:29.912227 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 19 09:49:29.912558 master-0 kubenswrapper[27819]: I0319 09:49:29.912516 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 19 09:49:29.912839 master-0 kubenswrapper[27819]: I0319 09:49:29.912816 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 19 09:49:29.916774 master-0 kubenswrapper[27819]: I0319 09:49:29.916718 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-52q7d"]
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.019724 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-fernet-keys\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.019820 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-scripts\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.019852 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23d034a2-6b7a-41f4-904d-f333f1ca8605-etc-podinfo\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.019969 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-combined-ca-bundle\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020023 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data-merged\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020047 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znjsc\" (UniqueName: \"kubernetes.io/projected/23d034a2-6b7a-41f4-904d-f333f1ca8605-kube-api-access-znjsc\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020132 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-config-data\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020155 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-credential-keys\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020205 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc4w\" (UniqueName: \"kubernetes.io/projected/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-kube-api-access-knc4w\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020232 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-scripts\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020254 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.021952 master-0 kubenswrapper[27819]: I0319 09:49:30.020286 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-combined-ca-bundle\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.122792 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc4w\" (UniqueName: \"kubernetes.io/projected/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-kube-api-access-knc4w\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.122846 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-scripts\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.122874 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.122904 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-combined-ca-bundle\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.122968 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-fernet-keys\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.122987 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-scripts\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.123010 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23d034a2-6b7a-41f4-904d-f333f1ca8605-etc-podinfo\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.123041 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-combined-ca-bundle\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.123074 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data-merged\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.123090 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znjsc\" (UniqueName: \"kubernetes.io/projected/23d034a2-6b7a-41f4-904d-f333f1ca8605-kube-api-access-znjsc\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6"
Mar 19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.123154 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-config-data\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d"
Mar
19 09:49:30.123653 master-0 kubenswrapper[27819]: I0319 09:49:30.123173 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-credential-keys\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:30.125080 master-0 kubenswrapper[27819]: I0319 09:49:30.125047 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data-merged\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6" Mar 19 09:49:30.127489 master-0 kubenswrapper[27819]: I0319 09:49:30.127449 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-credential-keys\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:30.129637 master-0 kubenswrapper[27819]: I0319 09:49:30.129522 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-scripts\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6" Mar 19 09:49:30.135696 master-0 kubenswrapper[27819]: I0319 09:49:30.131256 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6" Mar 19 09:49:30.135696 master-0 kubenswrapper[27819]: I0319 09:49:30.133158 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-scripts\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:30.137820 master-0 kubenswrapper[27819]: I0319 09:49:30.136968 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-combined-ca-bundle\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6" Mar 19 09:49:30.143932 master-0 kubenswrapper[27819]: I0319 09:49:30.141461 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc4w\" (UniqueName: \"kubernetes.io/projected/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-kube-api-access-knc4w\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:30.153762 master-0 kubenswrapper[27819]: I0319 09:49:30.143931 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-combined-ca-bundle\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:30.153762 master-0 kubenswrapper[27819]: I0319 09:49:30.145215 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-fernet-keys\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:30.153762 master-0 kubenswrapper[27819]: I0319 09:49:30.145695 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-znjsc\" (UniqueName: \"kubernetes.io/projected/23d034a2-6b7a-41f4-904d-f333f1ca8605-kube-api-access-znjsc\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6" Mar 19 09:49:30.153762 master-0 kubenswrapper[27819]: I0319 09:49:30.145815 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-config-data\") pod \"keystone-bootstrap-52q7d\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") " pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:30.153762 master-0 kubenswrapper[27819]: I0319 09:49:30.146761 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23d034a2-6b7a-41f4-904d-f333f1ca8605-etc-podinfo\") pod \"ironic-db-sync-975s6\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") " pod="openstack/ironic-db-sync-975s6" Mar 19 09:49:30.219382 master-0 kubenswrapper[27819]: I0319 09:49:30.219251 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-975s6" Mar 19 09:49:30.265772 master-0 kubenswrapper[27819]: I0319 09:49:30.265720 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-52q7d" Mar 19 09:49:31.294313 master-0 kubenswrapper[27819]: I0319 09:49:31.294260 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba4cae5-c391-4b24-8e04-5a97edc47fe2" path="/var/lib/kubelet/pods/6ba4cae5-c391-4b24-8e04-5a97edc47fe2/volumes" Mar 19 09:49:33.107053 master-0 kubenswrapper[27819]: I0319 09:49:33.106954 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:33.107053 master-0 kubenswrapper[27819]: I0319 09:49:33.107026 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:33.141104 master-0 kubenswrapper[27819]: I0319 09:49:33.141049 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:33.159308 master-0 kubenswrapper[27819]: I0319 09:49:33.159268 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:33.193949 master-0 kubenswrapper[27819]: I0319 09:49:33.193886 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:33.193949 master-0 kubenswrapper[27819]: I0319 09:49:33.193958 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:33.328215 master-0 kubenswrapper[27819]: I0319 09:49:33.328119 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.180:5353: connect: connection refused" Mar 19 09:49:35.148924 master-0 kubenswrapper[27819]: I0319 09:49:35.148852 27819 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:35.148924 master-0 kubenswrapper[27819]: I0319 09:49:35.148925 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:35.177712 master-0 kubenswrapper[27819]: I0319 09:49:35.177573 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:35.201211 master-0 kubenswrapper[27819]: I0319 09:49:35.201143 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:35.230752 master-0 kubenswrapper[27819]: I0319 09:49:35.230646 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:35.230945 master-0 kubenswrapper[27819]: I0319 09:49:35.230861 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:35.363047 master-0 kubenswrapper[27819]: I0319 09:49:35.362880 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:35.363047 master-0 kubenswrapper[27819]: I0319 09:49:35.363016 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:49:35.365987 master-0 kubenswrapper[27819]: I0319 09:49:35.365938 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:49:38.336602 master-0 kubenswrapper[27819]: I0319 09:49:38.334819 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.180:5353: connect: connection refused" Mar 
19 09:49:39.087879 master-0 kubenswrapper[27819]: I0319 09:49:39.087794 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:39.088121 master-0 kubenswrapper[27819]: I0319 09:49:39.087948 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:49:39.091119 master-0 kubenswrapper[27819]: I0319 09:49:39.090520 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:49:41.346665 master-0 kubenswrapper[27819]: I0319 09:49:41.346440 27819 generic.go:334] "Generic (PLEG): container finished" podID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerID="3ee9584c1f4889f02c39223d39a759e17fb0d166c173d3751dc22c602c027422" exitCode=0 Mar 19 09:49:41.346665 master-0 kubenswrapper[27819]: I0319 09:49:41.346521 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" event={"ID":"e8f2abdd-185a-42c6-9cb8-1a905b907791","Type":"ContainerDied","Data":"3ee9584c1f4889f02c39223d39a759e17fb0d166c173d3751dc22c602c027422"} Mar 19 09:49:41.625295 master-0 kubenswrapper[27819]: I0319 09:49:41.625039 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:49:41.746319 master-0 kubenswrapper[27819]: I0319 09:49:41.746248 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-config\") pod \"e8f2abdd-185a-42c6-9cb8-1a905b907791\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " Mar 19 09:49:41.746319 master-0 kubenswrapper[27819]: I0319 09:49:41.746311 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-sb\") pod \"e8f2abdd-185a-42c6-9cb8-1a905b907791\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " Mar 19 09:49:41.746616 master-0 kubenswrapper[27819]: I0319 09:49:41.746479 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-nb\") pod \"e8f2abdd-185a-42c6-9cb8-1a905b907791\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " Mar 19 09:49:41.746665 master-0 kubenswrapper[27819]: I0319 09:49:41.746646 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-dns-svc\") pod \"e8f2abdd-185a-42c6-9cb8-1a905b907791\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " Mar 19 09:49:41.746701 master-0 kubenswrapper[27819]: I0319 09:49:41.746682 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4vl8\" (UniqueName: \"kubernetes.io/projected/e8f2abdd-185a-42c6-9cb8-1a905b907791-kube-api-access-p4vl8\") pod \"e8f2abdd-185a-42c6-9cb8-1a905b907791\" (UID: \"e8f2abdd-185a-42c6-9cb8-1a905b907791\") " Mar 19 09:49:41.751234 master-0 kubenswrapper[27819]: I0319 09:49:41.751169 27819 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f2abdd-185a-42c6-9cb8-1a905b907791-kube-api-access-p4vl8" (OuterVolumeSpecName: "kube-api-access-p4vl8") pod "e8f2abdd-185a-42c6-9cb8-1a905b907791" (UID: "e8f2abdd-185a-42c6-9cb8-1a905b907791"). InnerVolumeSpecName "kube-api-access-p4vl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:41.803730 master-0 kubenswrapper[27819]: I0319 09:49:41.803673 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e8f2abdd-185a-42c6-9cb8-1a905b907791" (UID: "e8f2abdd-185a-42c6-9cb8-1a905b907791"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:41.809030 master-0 kubenswrapper[27819]: I0319 09:49:41.808823 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8f2abdd-185a-42c6-9cb8-1a905b907791" (UID: "e8f2abdd-185a-42c6-9cb8-1a905b907791"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:41.817692 master-0 kubenswrapper[27819]: I0319 09:49:41.814088 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-config" (OuterVolumeSpecName: "config") pod "e8f2abdd-185a-42c6-9cb8-1a905b907791" (UID: "e8f2abdd-185a-42c6-9cb8-1a905b907791"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:41.817692 master-0 kubenswrapper[27819]: I0319 09:49:41.817212 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8f2abdd-185a-42c6-9cb8-1a905b907791" (UID: "e8f2abdd-185a-42c6-9cb8-1a905b907791"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:41.850965 master-0 kubenswrapper[27819]: I0319 09:49:41.850855 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:41.851660 master-0 kubenswrapper[27819]: I0319 09:49:41.850965 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4vl8\" (UniqueName: \"kubernetes.io/projected/e8f2abdd-185a-42c6-9cb8-1a905b907791-kube-api-access-p4vl8\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:41.851756 master-0 kubenswrapper[27819]: I0319 09:49:41.851671 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:41.851756 master-0 kubenswrapper[27819]: I0319 09:49:41.851695 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:41.851756 master-0 kubenswrapper[27819]: I0319 09:49:41.851710 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8f2abdd-185a-42c6-9cb8-1a905b907791-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:41.973002 master-0 kubenswrapper[27819]: I0319 09:49:41.972889 27819 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-975s6"] Mar 19 09:49:41.982153 master-0 kubenswrapper[27819]: W0319 09:49:41.982049 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23d034a2_6b7a_41f4_904d_f333f1ca8605.slice/crio-4922d0b4201601abe6162ea9d144612590b5be1fec90ccaf4724de7233de8452 WatchSource:0}: Error finding container 4922d0b4201601abe6162ea9d144612590b5be1fec90ccaf4724de7233de8452: Status 404 returned error can't find the container with id 4922d0b4201601abe6162ea9d144612590b5be1fec90ccaf4724de7233de8452 Mar 19 09:49:42.082751 master-0 kubenswrapper[27819]: I0319 09:49:42.077666 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-52q7d"] Mar 19 09:49:42.090119 master-0 kubenswrapper[27819]: I0319 09:49:42.090058 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 09:49:42.363167 master-0 kubenswrapper[27819]: I0319 09:49:42.363088 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-52q7d" event={"ID":"b9db4b54-f904-4d5e-95e6-93e2cee01d6b","Type":"ContainerStarted","Data":"63c1e554ea276bab320ed3645bc7a4e6a2d6214271cd866718b05d01add4f026"} Mar 19 09:49:42.363167 master-0 kubenswrapper[27819]: I0319 09:49:42.363149 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-52q7d" event={"ID":"b9db4b54-f904-4d5e-95e6-93e2cee01d6b","Type":"ContainerStarted","Data":"db0213df4af0f354ece54251bf2f30da9e28380819cfa06eaa0fc3c9c30b8ac4"} Mar 19 09:49:42.370067 master-0 kubenswrapper[27819]: I0319 09:49:42.370003 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-975s6" event={"ID":"23d034a2-6b7a-41f4-904d-f333f1ca8605","Type":"ContainerStarted","Data":"4922d0b4201601abe6162ea9d144612590b5be1fec90ccaf4724de7233de8452"} Mar 19 09:49:42.373016 master-0 
kubenswrapper[27819]: I0319 09:49:42.372944 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" event={"ID":"e8f2abdd-185a-42c6-9cb8-1a905b907791","Type":"ContainerDied","Data":"f77689176406a5a868ecd60d1907ea702aaf58bc228846d2193d7ef133b71f44"} Mar 19 09:49:42.373099 master-0 kubenswrapper[27819]: I0319 09:49:42.373034 27819 scope.go:117] "RemoveContainer" containerID="3ee9584c1f4889f02c39223d39a759e17fb0d166c173d3751dc22c602c027422" Mar 19 09:49:42.373452 master-0 kubenswrapper[27819]: I0319 09:49:42.373301 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8b865dc-ht7bl" Mar 19 09:49:42.374644 master-0 kubenswrapper[27819]: I0319 09:49:42.374601 27819 generic.go:334] "Generic (PLEG): container finished" podID="1abe1e88-82d4-488e-bd25-08cf29f5952e" containerID="16efc548ff718c4742f6ef96f68919148a2e5375bb900f86044285f55d8e809e" exitCode=0 Mar 19 09:49:42.374716 master-0 kubenswrapper[27819]: I0319 09:49:42.374686 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6mzj" event={"ID":"1abe1e88-82d4-488e-bd25-08cf29f5952e","Type":"ContainerDied","Data":"16efc548ff718c4742f6ef96f68919148a2e5375bb900f86044285f55d8e809e"} Mar 19 09:49:42.379228 master-0 kubenswrapper[27819]: I0319 09:49:42.377807 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-db-sync-fvnxz" event={"ID":"5ef7ab15-9976-4989-b837-55f0b27ee661","Type":"ContainerStarted","Data":"a29063fdc487022c455c11e3e132c93389620ff41f4f30e98cee22be95288ab1"} Mar 19 09:49:42.411154 master-0 kubenswrapper[27819]: I0319 09:49:42.411077 27819 scope.go:117] "RemoveContainer" containerID="6452eb0b322bf114f26e7e95257237fb6a0f19b13becec0729ceb5f70035d18b" Mar 19 09:49:42.422564 master-0 kubenswrapper[27819]: I0319 09:49:42.422472 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-52q7d" 
podStartSLOduration=13.422450884 podStartE2EDuration="13.422450884s" podCreationTimestamp="2026-03-19 09:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:42.391886122 +0000 UTC m=+967.313463834" watchObservedRunningTime="2026-03-19 09:49:42.422450884 +0000 UTC m=+967.344028576" Mar 19 09:49:42.491186 master-0 kubenswrapper[27819]: I0319 09:49:42.488966 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-db-sync-fvnxz" podStartSLOduration=3.134120163 podStartE2EDuration="25.488940837s" podCreationTimestamp="2026-03-19 09:49:17 +0000 UTC" firstStartedPulling="2026-03-19 09:49:19.189439681 +0000 UTC m=+944.111017373" lastFinishedPulling="2026-03-19 09:49:41.544260355 +0000 UTC m=+966.465838047" observedRunningTime="2026-03-19 09:49:42.463496507 +0000 UTC m=+967.385074209" watchObservedRunningTime="2026-03-19 09:49:42.488940837 +0000 UTC m=+967.410518529" Mar 19 09:49:42.503713 master-0 kubenswrapper[27819]: I0319 09:49:42.502584 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-ht7bl"] Mar 19 09:49:42.514613 master-0 kubenswrapper[27819]: I0319 09:49:42.512234 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf8b865dc-ht7bl"] Mar 19 09:49:43.303236 master-0 kubenswrapper[27819]: I0319 09:49:43.303156 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" path="/var/lib/kubelet/pods/e8f2abdd-185a-42c6-9cb8-1a905b907791/volumes" Mar 19 09:49:43.877493 master-0 kubenswrapper[27819]: I0319 09:49:43.877404 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-s6mzj" Mar 19 09:49:43.929068 master-0 kubenswrapper[27819]: I0319 09:49:43.929011 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk2tt\" (UniqueName: \"kubernetes.io/projected/1abe1e88-82d4-488e-bd25-08cf29f5952e-kube-api-access-bk2tt\") pod \"1abe1e88-82d4-488e-bd25-08cf29f5952e\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " Mar 19 09:49:43.929400 master-0 kubenswrapper[27819]: I0319 09:49:43.929382 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-config-data\") pod \"1abe1e88-82d4-488e-bd25-08cf29f5952e\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " Mar 19 09:49:43.929496 master-0 kubenswrapper[27819]: I0319 09:49:43.929483 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-scripts\") pod \"1abe1e88-82d4-488e-bd25-08cf29f5952e\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " Mar 19 09:49:43.929691 master-0 kubenswrapper[27819]: I0319 09:49:43.929676 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-combined-ca-bundle\") pod \"1abe1e88-82d4-488e-bd25-08cf29f5952e\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " Mar 19 09:49:43.929834 master-0 kubenswrapper[27819]: I0319 09:49:43.929821 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abe1e88-82d4-488e-bd25-08cf29f5952e-logs\") pod \"1abe1e88-82d4-488e-bd25-08cf29f5952e\" (UID: \"1abe1e88-82d4-488e-bd25-08cf29f5952e\") " Mar 19 09:49:43.930205 master-0 kubenswrapper[27819]: I0319 09:49:43.930176 27819 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1abe1e88-82d4-488e-bd25-08cf29f5952e-logs" (OuterVolumeSpecName: "logs") pod "1abe1e88-82d4-488e-bd25-08cf29f5952e" (UID: "1abe1e88-82d4-488e-bd25-08cf29f5952e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:49:43.930653 master-0 kubenswrapper[27819]: I0319 09:49:43.930635 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1abe1e88-82d4-488e-bd25-08cf29f5952e-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:43.932576 master-0 kubenswrapper[27819]: I0319 09:49:43.932507 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abe1e88-82d4-488e-bd25-08cf29f5952e-kube-api-access-bk2tt" (OuterVolumeSpecName: "kube-api-access-bk2tt") pod "1abe1e88-82d4-488e-bd25-08cf29f5952e" (UID: "1abe1e88-82d4-488e-bd25-08cf29f5952e"). InnerVolumeSpecName "kube-api-access-bk2tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:43.933136 master-0 kubenswrapper[27819]: I0319 09:49:43.933041 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-scripts" (OuterVolumeSpecName: "scripts") pod "1abe1e88-82d4-488e-bd25-08cf29f5952e" (UID: "1abe1e88-82d4-488e-bd25-08cf29f5952e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:43.963826 master-0 kubenswrapper[27819]: I0319 09:49:43.963684 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1abe1e88-82d4-488e-bd25-08cf29f5952e" (UID: "1abe1e88-82d4-488e-bd25-08cf29f5952e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:43.971814 master-0 kubenswrapper[27819]: I0319 09:49:43.971661 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-config-data" (OuterVolumeSpecName: "config-data") pod "1abe1e88-82d4-488e-bd25-08cf29f5952e" (UID: "1abe1e88-82d4-488e-bd25-08cf29f5952e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:44.033151 master-0 kubenswrapper[27819]: I0319 09:49:44.033104 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:44.033245 master-0 kubenswrapper[27819]: I0319 09:49:44.033156 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk2tt\" (UniqueName: \"kubernetes.io/projected/1abe1e88-82d4-488e-bd25-08cf29f5952e-kube-api-access-bk2tt\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:44.033245 master-0 kubenswrapper[27819]: I0319 09:49:44.033170 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:44.033245 master-0 kubenswrapper[27819]: I0319 09:49:44.033178 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1abe1e88-82d4-488e-bd25-08cf29f5952e-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:44.423726 master-0 kubenswrapper[27819]: I0319 09:49:44.423670 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-s6mzj" event={"ID":"1abe1e88-82d4-488e-bd25-08cf29f5952e","Type":"ContainerDied","Data":"c8ff4ddc05916d2532246f4d2e2c1acef87e215e94c25e6ad68fd0cf2be0d51a"} Mar 19 09:49:44.423726 master-0 
kubenswrapper[27819]: I0319 09:49:44.423729 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ff4ddc05916d2532246f4d2e2c1acef87e215e94c25e6ad68fd0cf2be0d51a"
Mar 19 09:49:44.424475 master-0 kubenswrapper[27819]: I0319 09:49:44.423829 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-s6mzj"
Mar 19 09:49:44.712620 master-0 kubenswrapper[27819]: I0319 09:49:44.712484 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5f4b5cb8b6-kmwr8"]
Mar 19 09:49:44.713064 master-0 kubenswrapper[27819]: E0319 09:49:44.713033 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="init"
Mar 19 09:49:44.713064 master-0 kubenswrapper[27819]: I0319 09:49:44.713058 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="init"
Mar 19 09:49:44.713145 master-0 kubenswrapper[27819]: E0319 09:49:44.713098 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abe1e88-82d4-488e-bd25-08cf29f5952e" containerName="placement-db-sync"
Mar 19 09:49:44.713145 master-0 kubenswrapper[27819]: I0319 09:49:44.713106 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abe1e88-82d4-488e-bd25-08cf29f5952e" containerName="placement-db-sync"
Mar 19 09:49:44.713145 master-0 kubenswrapper[27819]: E0319 09:49:44.713140 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="dnsmasq-dns"
Mar 19 09:49:44.713234 master-0 kubenswrapper[27819]: I0319 09:49:44.713150 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="dnsmasq-dns"
Mar 19 09:49:44.713468 master-0 kubenswrapper[27819]: I0319 09:49:44.713441 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f2abdd-185a-42c6-9cb8-1a905b907791" containerName="dnsmasq-dns"
Mar 19 09:49:44.713517 master-0 kubenswrapper[27819]: I0319 09:49:44.713481 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abe1e88-82d4-488e-bd25-08cf29f5952e" containerName="placement-db-sync"
Mar 19 09:49:44.715269 master-0 kubenswrapper[27819]: I0319 09:49:44.714825 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.723565 master-0 kubenswrapper[27819]: I0319 09:49:44.720373 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 19 09:49:44.723565 master-0 kubenswrapper[27819]: I0319 09:49:44.720376 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 19 09:49:44.723565 master-0 kubenswrapper[27819]: I0319 09:49:44.720573 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 19 09:49:44.723565 master-0 kubenswrapper[27819]: I0319 09:49:44.720579 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 19 09:49:44.725147 master-0 kubenswrapper[27819]: I0319 09:49:44.723938 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f4b5cb8b6-kmwr8"]
Mar 19 09:49:44.748966 master-0 kubenswrapper[27819]: I0319 09:49:44.748897 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9213d9a0-94b1-431b-8116-8fafc2a636cf-logs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.749452 master-0 kubenswrapper[27819]: I0319 09:49:44.749430 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-combined-ca-bundle\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.749621 master-0 kubenswrapper[27819]: I0319 09:49:44.749604 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-scripts\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.749724 master-0 kubenswrapper[27819]: I0319 09:49:44.749709 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-public-tls-certs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.749879 master-0 kubenswrapper[27819]: I0319 09:49:44.749866 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w277l\" (UniqueName: \"kubernetes.io/projected/9213d9a0-94b1-431b-8116-8fafc2a636cf-kube-api-access-w277l\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.749977 master-0 kubenswrapper[27819]: I0319 09:49:44.749964 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-config-data\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.751741 master-0 kubenswrapper[27819]: I0319 09:49:44.750234 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-internal-tls-certs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.854043 master-0 kubenswrapper[27819]: I0319 09:49:44.853954 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9213d9a0-94b1-431b-8116-8fafc2a636cf-logs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.855915 master-0 kubenswrapper[27819]: I0319 09:49:44.854055 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-combined-ca-bundle\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.855915 master-0 kubenswrapper[27819]: I0319 09:49:44.854120 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-scripts\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.855915 master-0 kubenswrapper[27819]: I0319 09:49:44.854151 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-public-tls-certs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.855915 master-0 kubenswrapper[27819]: I0319 09:49:44.854229 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w277l\" (UniqueName: \"kubernetes.io/projected/9213d9a0-94b1-431b-8116-8fafc2a636cf-kube-api-access-w277l\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.855915 master-0 kubenswrapper[27819]: I0319 09:49:44.854369 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-config-data\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.855915 master-0 kubenswrapper[27819]: I0319 09:49:44.854448 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-internal-tls-certs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.856855 master-0 kubenswrapper[27819]: I0319 09:49:44.856826 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9213d9a0-94b1-431b-8116-8fafc2a636cf-logs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.859167 master-0 kubenswrapper[27819]: I0319 09:49:44.859122 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-combined-ca-bundle\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.859458 master-0 kubenswrapper[27819]: I0319 09:49:44.859419 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-config-data\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.860257 master-0 kubenswrapper[27819]: I0319 09:49:44.860215 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-internal-tls-certs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.871171 master-0 kubenswrapper[27819]: I0319 09:49:44.862563 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-public-tls-certs\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.871875 master-0 kubenswrapper[27819]: I0319 09:49:44.871839 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-scripts\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:44.877902 master-0 kubenswrapper[27819]: I0319 09:49:44.877850 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w277l\" (UniqueName: \"kubernetes.io/projected/9213d9a0-94b1-431b-8116-8fafc2a636cf-kube-api-access-w277l\") pod \"placement-5f4b5cb8b6-kmwr8\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") " pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:45.062750 master-0 kubenswrapper[27819]: I0319 09:49:45.062616 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:49.496389 master-0 kubenswrapper[27819]: I0319 09:49:49.496317 27819 generic.go:334] "Generic (PLEG): container finished" podID="b9db4b54-f904-4d5e-95e6-93e2cee01d6b" containerID="63c1e554ea276bab320ed3645bc7a4e6a2d6214271cd866718b05d01add4f026" exitCode=0
Mar 19 09:49:49.497342 master-0 kubenswrapper[27819]: I0319 09:49:49.496387 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-52q7d" event={"ID":"b9db4b54-f904-4d5e-95e6-93e2cee01d6b","Type":"ContainerDied","Data":"63c1e554ea276bab320ed3645bc7a4e6a2d6214271cd866718b05d01add4f026"}
Mar 19 09:49:49.965367 master-0 kubenswrapper[27819]: W0319 09:49:49.964594 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9213d9a0_94b1_431b_8116_8fafc2a636cf.slice/crio-5591e40d63764b02908dc07f6a0c881d65a4d3ed39ed32786684efc98a7d6884 WatchSource:0}: Error finding container 5591e40d63764b02908dc07f6a0c881d65a4d3ed39ed32786684efc98a7d6884: Status 404 returned error can't find the container with id 5591e40d63764b02908dc07f6a0c881d65a4d3ed39ed32786684efc98a7d6884
Mar 19 09:49:49.965367 master-0 kubenswrapper[27819]: I0319 09:49:49.965008 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5f4b5cb8b6-kmwr8"]
Mar 19 09:49:50.509654 master-0 kubenswrapper[27819]: I0319 09:49:50.509597 27819 generic.go:334] "Generic (PLEG): container finished" podID="77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" containerID="b108245d4e4e64ade9bcd2737ecd345356a67a3af201f5d6c0ab09fe5b888925" exitCode=0
Mar 19 09:49:50.510802 master-0 kubenswrapper[27819]: I0319 09:49:50.509659 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4bmsm" event={"ID":"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0","Type":"ContainerDied","Data":"b108245d4e4e64ade9bcd2737ecd345356a67a3af201f5d6c0ab09fe5b888925"}
Mar 19 09:49:50.514872 master-0 kubenswrapper[27819]: I0319 09:49:50.514772 27819 generic.go:334] "Generic (PLEG): container finished" podID="5ef7ab15-9976-4989-b837-55f0b27ee661" containerID="a29063fdc487022c455c11e3e132c93389620ff41f4f30e98cee22be95288ab1" exitCode=0
Mar 19 09:49:50.514872 master-0 kubenswrapper[27819]: I0319 09:49:50.514847 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-db-sync-fvnxz" event={"ID":"5ef7ab15-9976-4989-b837-55f0b27ee661","Type":"ContainerDied","Data":"a29063fdc487022c455c11e3e132c93389620ff41f4f30e98cee22be95288ab1"}
Mar 19 09:49:50.518825 master-0 kubenswrapper[27819]: I0319 09:49:50.518691 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-975s6" event={"ID":"23d034a2-6b7a-41f4-904d-f333f1ca8605","Type":"ContainerStarted","Data":"76ee1f2703359f9ccf0890f11aaf8273188d893651abce2bf83630ba2a84a232"}
Mar 19 09:49:50.522044 master-0 kubenswrapper[27819]: I0319 09:49:50.521852 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f4b5cb8b6-kmwr8" event={"ID":"9213d9a0-94b1-431b-8116-8fafc2a636cf","Type":"ContainerStarted","Data":"ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b"}
Mar 19 09:49:50.522044 master-0 kubenswrapper[27819]: I0319 09:49:50.521897 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f4b5cb8b6-kmwr8" event={"ID":"9213d9a0-94b1-431b-8116-8fafc2a636cf","Type":"ContainerStarted","Data":"5591e40d63764b02908dc07f6a0c881d65a4d3ed39ed32786684efc98a7d6884"}
Mar 19 09:49:50.848897 master-0 kubenswrapper[27819]: I0319 09:49:50.848857 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:50.976384 master-0 kubenswrapper[27819]: I0319 09:49:50.976301 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-config-data\") pod \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") "
Mar 19 09:49:50.976657 master-0 kubenswrapper[27819]: I0319 09:49:50.976423 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-fernet-keys\") pod \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") "
Mar 19 09:49:50.976657 master-0 kubenswrapper[27819]: I0319 09:49:50.976464 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-scripts\") pod \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") "
Mar 19 09:49:50.976657 master-0 kubenswrapper[27819]: I0319 09:49:50.976515 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knc4w\" (UniqueName: \"kubernetes.io/projected/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-kube-api-access-knc4w\") pod \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") "
Mar 19 09:49:50.976657 master-0 kubenswrapper[27819]: I0319 09:49:50.976583 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-credential-keys\") pod \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") "
Mar 19 09:49:50.976851 master-0 kubenswrapper[27819]: I0319 09:49:50.976705 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-combined-ca-bundle\") pod \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\" (UID: \"b9db4b54-f904-4d5e-95e6-93e2cee01d6b\") "
Mar 19 09:49:50.979789 master-0 kubenswrapper[27819]: I0319 09:49:50.979709 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-scripts" (OuterVolumeSpecName: "scripts") pod "b9db4b54-f904-4d5e-95e6-93e2cee01d6b" (UID: "b9db4b54-f904-4d5e-95e6-93e2cee01d6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:50.980385 master-0 kubenswrapper[27819]: I0319 09:49:50.980341 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-kube-api-access-knc4w" (OuterVolumeSpecName: "kube-api-access-knc4w") pod "b9db4b54-f904-4d5e-95e6-93e2cee01d6b" (UID: "b9db4b54-f904-4d5e-95e6-93e2cee01d6b"). InnerVolumeSpecName "kube-api-access-knc4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:49:50.981065 master-0 kubenswrapper[27819]: I0319 09:49:50.980995 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b9db4b54-f904-4d5e-95e6-93e2cee01d6b" (UID: "b9db4b54-f904-4d5e-95e6-93e2cee01d6b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:50.981381 master-0 kubenswrapper[27819]: I0319 09:49:50.981336 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b9db4b54-f904-4d5e-95e6-93e2cee01d6b" (UID: "b9db4b54-f904-4d5e-95e6-93e2cee01d6b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:51.003518 master-0 kubenswrapper[27819]: I0319 09:49:51.003467 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9db4b54-f904-4d5e-95e6-93e2cee01d6b" (UID: "b9db4b54-f904-4d5e-95e6-93e2cee01d6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:51.005671 master-0 kubenswrapper[27819]: I0319 09:49:51.005623 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-config-data" (OuterVolumeSpecName: "config-data") pod "b9db4b54-f904-4d5e-95e6-93e2cee01d6b" (UID: "b9db4b54-f904-4d5e-95e6-93e2cee01d6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:51.079218 master-0 kubenswrapper[27819]: I0319 09:49:51.079153 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:51.079218 master-0 kubenswrapper[27819]: I0319 09:49:51.079197 27819 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:51.079218 master-0 kubenswrapper[27819]: I0319 09:49:51.079206 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:51.079218 master-0 kubenswrapper[27819]: I0319 09:49:51.079216 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knc4w\" (UniqueName: \"kubernetes.io/projected/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-kube-api-access-knc4w\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:51.079218 master-0 kubenswrapper[27819]: I0319 09:49:51.079228 27819 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-credential-keys\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:51.079607 master-0 kubenswrapper[27819]: I0319 09:49:51.079236 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9db4b54-f904-4d5e-95e6-93e2cee01d6b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:51.298976 master-0 kubenswrapper[27819]: I0319 09:49:51.298915 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6799b89bd8-q5hf4"]
Mar 19 09:49:51.299444 master-0 kubenswrapper[27819]: E0319 09:49:51.299419 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9db4b54-f904-4d5e-95e6-93e2cee01d6b" containerName="keystone-bootstrap"
Mar 19 09:49:51.299487 master-0 kubenswrapper[27819]: I0319 09:49:51.299444 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9db4b54-f904-4d5e-95e6-93e2cee01d6b" containerName="keystone-bootstrap"
Mar 19 09:49:51.299813 master-0 kubenswrapper[27819]: I0319 09:49:51.299789 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9db4b54-f904-4d5e-95e6-93e2cee01d6b" containerName="keystone-bootstrap"
Mar 19 09:49:51.300764 master-0 kubenswrapper[27819]: I0319 09:49:51.300730 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.308192 master-0 kubenswrapper[27819]: I0319 09:49:51.308130 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6799b89bd8-q5hf4"]
Mar 19 09:49:51.311460 master-0 kubenswrapper[27819]: I0319 09:49:51.311419 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Mar 19 09:49:51.311776 master-0 kubenswrapper[27819]: I0319 09:49:51.311738 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Mar 19 09:49:51.392442 master-0 kubenswrapper[27819]: I0319 09:49:51.392370 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-internal-tls-certs\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.392721 master-0 kubenswrapper[27819]: I0319 09:49:51.392705 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-public-tls-certs\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.393035 master-0 kubenswrapper[27819]: I0319 09:49:51.393018 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-scripts\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.393134 master-0 kubenswrapper[27819]: I0319 09:49:51.393120 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-config-data\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.393282 master-0 kubenswrapper[27819]: I0319 09:49:51.393266 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-credential-keys\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.393403 master-0 kubenswrapper[27819]: I0319 09:49:51.393389 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhsbl\" (UniqueName: \"kubernetes.io/projected/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-kube-api-access-qhsbl\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.393493 master-0 kubenswrapper[27819]: I0319 09:49:51.393479 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-combined-ca-bundle\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.393617 master-0 kubenswrapper[27819]: I0319 09:49:51.393603 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-fernet-keys\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.494579 master-0 kubenswrapper[27819]: I0319 09:49:51.494495 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-config-data\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.494848 master-0 kubenswrapper[27819]: I0319 09:49:51.494613 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-credential-keys\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.494848 master-0 kubenswrapper[27819]: I0319 09:49:51.494671 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhsbl\" (UniqueName: \"kubernetes.io/projected/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-kube-api-access-qhsbl\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.495396 master-0 kubenswrapper[27819]: I0319 09:49:51.495361 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-combined-ca-bundle\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.495680 master-0 kubenswrapper[27819]: I0319 09:49:51.495666 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-fernet-keys\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.496220 master-0 kubenswrapper[27819]: I0319 09:49:51.496204 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-internal-tls-certs\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.496367 master-0 kubenswrapper[27819]: I0319 09:49:51.496354 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-public-tls-certs\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.496528 master-0 kubenswrapper[27819]: I0319 09:49:51.496513 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-scripts\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.498612 master-0 kubenswrapper[27819]: I0319 09:49:51.498580 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-credential-keys\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.498682 master-0 kubenswrapper[27819]: I0319 09:49:51.498611 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-internal-tls-certs\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.499762 master-0 kubenswrapper[27819]: I0319 09:49:51.499155 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-combined-ca-bundle\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.500291 master-0 kubenswrapper[27819]: I0319 09:49:51.499913 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-scripts\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.501374 master-0 kubenswrapper[27819]: I0319 09:49:51.501329 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-public-tls-certs\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.503535 master-0 kubenswrapper[27819]: I0319 09:49:51.503490 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-config-data\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.505676 master-0 kubenswrapper[27819]: I0319 09:49:51.505358 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-fernet-keys\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.515789 master-0 kubenswrapper[27819]: I0319 09:49:51.515689 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhsbl\" (UniqueName: \"kubernetes.io/projected/f8c22ca8-2158-49d0-8d4a-cd190cb7bb01-kube-api-access-qhsbl\") pod \"keystone-6799b89bd8-q5hf4\" (UID: \"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01\") " pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.536477 master-0 kubenswrapper[27819]: I0319 09:49:51.535882 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f4b5cb8b6-kmwr8" event={"ID":"9213d9a0-94b1-431b-8116-8fafc2a636cf","Type":"ContainerStarted","Data":"475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de"}
Mar 19 09:49:51.536477 master-0 kubenswrapper[27819]: I0319 09:49:51.536363 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:51.536708 master-0 kubenswrapper[27819]: I0319 09:49:51.536559 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:49:51.538796 master-0 kubenswrapper[27819]: I0319 09:49:51.538727 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-52q7d" event={"ID":"b9db4b54-f904-4d5e-95e6-93e2cee01d6b","Type":"ContainerDied","Data":"db0213df4af0f354ece54251bf2f30da9e28380819cfa06eaa0fc3c9c30b8ac4"}
Mar 19 09:49:51.538796 master-0 kubenswrapper[27819]: I0319 09:49:51.538776 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db0213df4af0f354ece54251bf2f30da9e28380819cfa06eaa0fc3c9c30b8ac4"
Mar 19 09:49:51.538923 master-0 kubenswrapper[27819]: I0319 09:49:51.538799 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-52q7d"
Mar 19 09:49:51.543474 master-0 kubenswrapper[27819]: I0319 09:49:51.542834 27819 generic.go:334] "Generic (PLEG): container finished" podID="23d034a2-6b7a-41f4-904d-f333f1ca8605" containerID="76ee1f2703359f9ccf0890f11aaf8273188d893651abce2bf83630ba2a84a232" exitCode=0
Mar 19 09:49:51.543474 master-0 kubenswrapper[27819]: I0319 09:49:51.542915 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-975s6" event={"ID":"23d034a2-6b7a-41f4-904d-f333f1ca8605","Type":"ContainerDied","Data":"76ee1f2703359f9ccf0890f11aaf8273188d893651abce2bf83630ba2a84a232"}
Mar 19 09:49:51.583503 master-0 kubenswrapper[27819]: I0319 09:49:51.574144 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5f4b5cb8b6-kmwr8" podStartSLOduration=7.574120732 podStartE2EDuration="7.574120732s" podCreationTimestamp="2026-03-19 09:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:51.561907365 +0000 UTC m=+976.483485057" watchObservedRunningTime="2026-03-19 09:49:51.574120732 +0000 UTC m=+976.495698424"
Mar 19 09:49:51.639162 master-0 kubenswrapper[27819]: I0319 09:49:51.638810 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6799b89bd8-q5hf4"
Mar 19 09:49:51.958085 master-0 kubenswrapper[27819]: I0319 09:49:51.957723 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-db-sync-fvnxz"
Mar 19 09:49:52.127303 master-0 kubenswrapper[27819]: I0319 09:49:52.127217 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-combined-ca-bundle\") pod \"5ef7ab15-9976-4989-b837-55f0b27ee661\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") "
Mar 19 09:49:52.127303 master-0 kubenswrapper[27819]: I0319 09:49:52.127311 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ef7ab15-9976-4989-b837-55f0b27ee661-etc-machine-id\") pod \"5ef7ab15-9976-4989-b837-55f0b27ee661\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") "
Mar 19 09:49:52.127624 master-0 kubenswrapper[27819]: I0319 09:49:52.127356 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-db-sync-config-data\") pod \"5ef7ab15-9976-4989-b837-55f0b27ee661\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") "
Mar 19 09:49:52.127624 master-0 kubenswrapper[27819]: I0319 09:49:52.127457 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-scripts\") pod \"5ef7ab15-9976-4989-b837-55f0b27ee661\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") "
Mar 19 09:49:52.127624 master-0 kubenswrapper[27819]: I0319 09:49:52.127578 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ef7ab15-9976-4989-b837-55f0b27ee661-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5ef7ab15-9976-4989-b837-55f0b27ee661" (UID: "5ef7ab15-9976-4989-b837-55f0b27ee661"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:49:52.127766 master-0 kubenswrapper[27819]: I0319 09:49:52.127706 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-config-data\") pod \"5ef7ab15-9976-4989-b837-55f0b27ee661\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") "
Mar 19 09:49:52.127874 master-0 kubenswrapper[27819]: I0319 09:49:52.127842 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgnlc\" (UniqueName: \"kubernetes.io/projected/5ef7ab15-9976-4989-b837-55f0b27ee661-kube-api-access-bgnlc\") pod \"5ef7ab15-9976-4989-b837-55f0b27ee661\" (UID: \"5ef7ab15-9976-4989-b837-55f0b27ee661\") "
Mar 19 09:49:52.128435 master-0 kubenswrapper[27819]: I0319 09:49:52.128371 27819 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ef7ab15-9976-4989-b837-55f0b27ee661-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 09:49:52.134207 master-0 kubenswrapper[27819]: I0319 09:49:52.133884 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5ef7ab15-9976-4989-b837-55f0b27ee661" (UID: "5ef7ab15-9976-4989-b837-55f0b27ee661"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:49:52.134207 master-0 kubenswrapper[27819]: I0319 09:49:52.133940 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-scripts" (OuterVolumeSpecName: "scripts") pod "5ef7ab15-9976-4989-b837-55f0b27ee661" (UID: "5ef7ab15-9976-4989-b837-55f0b27ee661"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:52.134207 master-0 kubenswrapper[27819]: I0319 09:49:52.133991 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ef7ab15-9976-4989-b837-55f0b27ee661-kube-api-access-bgnlc" (OuterVolumeSpecName: "kube-api-access-bgnlc") pod "5ef7ab15-9976-4989-b837-55f0b27ee661" (UID: "5ef7ab15-9976-4989-b837-55f0b27ee661"). InnerVolumeSpecName "kube-api-access-bgnlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:52.159519 master-0 kubenswrapper[27819]: I0319 09:49:52.159420 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5ef7ab15-9976-4989-b837-55f0b27ee661" (UID: "5ef7ab15-9976-4989-b837-55f0b27ee661"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:52.184825 master-0 kubenswrapper[27819]: I0319 09:49:52.184401 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6799b89bd8-q5hf4"] Mar 19 09:49:52.184825 master-0 kubenswrapper[27819]: I0319 09:49:52.184757 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-config-data" (OuterVolumeSpecName: "config-data") pod "5ef7ab15-9976-4989-b837-55f0b27ee661" (UID: "5ef7ab15-9976-4989-b837-55f0b27ee661"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:52.229935 master-0 kubenswrapper[27819]: I0319 09:49:52.229657 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.229935 master-0 kubenswrapper[27819]: I0319 09:49:52.229716 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgnlc\" (UniqueName: \"kubernetes.io/projected/5ef7ab15-9976-4989-b837-55f0b27ee661-kube-api-access-bgnlc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.229935 master-0 kubenswrapper[27819]: I0319 09:49:52.229729 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.229935 master-0 kubenswrapper[27819]: I0319 09:49:52.229740 27819 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.229935 master-0 kubenswrapper[27819]: I0319 09:49:52.229749 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ef7ab15-9976-4989-b837-55f0b27ee661-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.280739 master-0 kubenswrapper[27819]: I0319 09:49:52.280679 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:52.336931 master-0 kubenswrapper[27819]: I0319 09:49:52.332167 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdrj\" (UniqueName: \"kubernetes.io/projected/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-kube-api-access-4cdrj\") pod \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " Mar 19 09:49:52.336931 master-0 kubenswrapper[27819]: I0319 09:49:52.332306 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-combined-ca-bundle\") pod \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " Mar 19 09:49:52.336931 master-0 kubenswrapper[27819]: I0319 09:49:52.336798 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-kube-api-access-4cdrj" (OuterVolumeSpecName: "kube-api-access-4cdrj") pod "77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" (UID: "77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0"). InnerVolumeSpecName "kube-api-access-4cdrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:52.371372 master-0 kubenswrapper[27819]: I0319 09:49:52.371047 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" (UID: "77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:52.434407 master-0 kubenswrapper[27819]: I0319 09:49:52.433294 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-config\") pod \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\" (UID: \"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0\") " Mar 19 09:49:52.434407 master-0 kubenswrapper[27819]: I0319 09:49:52.433740 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdrj\" (UniqueName: \"kubernetes.io/projected/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-kube-api-access-4cdrj\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.434407 master-0 kubenswrapper[27819]: I0319 09:49:52.433763 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.457757 master-0 kubenswrapper[27819]: I0319 09:49:52.457685 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-config" (OuterVolumeSpecName: "config") pod "77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" (UID: "77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:49:52.535815 master-0 kubenswrapper[27819]: I0319 09:49:52.535743 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:52.554138 master-0 kubenswrapper[27819]: I0319 09:49:52.554072 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6799b89bd8-q5hf4" event={"ID":"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01","Type":"ContainerStarted","Data":"6583406b66b85f9a6b1c1a741ae222191036eebae7af5c7d8f024c469496e817"} Mar 19 09:49:52.554138 master-0 kubenswrapper[27819]: I0319 09:49:52.554128 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6799b89bd8-q5hf4" event={"ID":"f8c22ca8-2158-49d0-8d4a-cd190cb7bb01","Type":"ContainerStarted","Data":"c70588edc21a8b2e61186b5bc2f8c95af3c8e17a3df4474495a41fade35dbf3f"} Mar 19 09:49:52.555377 master-0 kubenswrapper[27819]: I0319 09:49:52.555287 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6799b89bd8-q5hf4" Mar 19 09:49:52.557363 master-0 kubenswrapper[27819]: I0319 09:49:52.557249 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4bmsm" event={"ID":"77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0","Type":"ContainerDied","Data":"bc60a964c0d0a77cfa1cdc6cf507a6dda779ece7a767966a32c80fc51d95303d"} Mar 19 09:49:52.557363 master-0 kubenswrapper[27819]: I0319 09:49:52.557277 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc60a964c0d0a77cfa1cdc6cf507a6dda779ece7a767966a32c80fc51d95303d" Mar 19 09:49:52.557363 master-0 kubenswrapper[27819]: I0319 09:49:52.557320 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4bmsm" Mar 19 09:49:52.563432 master-0 kubenswrapper[27819]: I0319 09:49:52.563203 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-db-sync-fvnxz" event={"ID":"5ef7ab15-9976-4989-b837-55f0b27ee661","Type":"ContainerDied","Data":"34ea14cbeb5ab13b8f49723bc477859d2466fe1aa9f9d5724b97ece954bb70d6"} Mar 19 09:49:52.563432 master-0 kubenswrapper[27819]: I0319 09:49:52.563255 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ea14cbeb5ab13b8f49723bc477859d2466fe1aa9f9d5724b97ece954bb70d6" Mar 19 09:49:52.563432 master-0 kubenswrapper[27819]: I0319 09:49:52.563223 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-db-sync-fvnxz" Mar 19 09:49:52.584527 master-0 kubenswrapper[27819]: I0319 09:49:52.584398 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-975s6" event={"ID":"23d034a2-6b7a-41f4-904d-f333f1ca8605","Type":"ContainerStarted","Data":"b5c48c7fbdde94ca974b5f96e8471c98a7ccfd64b116eff8a658403787596e23"} Mar 19 09:49:52.612829 master-0 kubenswrapper[27819]: I0319 09:49:52.611590 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6799b89bd8-q5hf4" podStartSLOduration=1.611527645 podStartE2EDuration="1.611527645s" podCreationTimestamp="2026-03-19 09:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:52.587948625 +0000 UTC m=+977.509526337" watchObservedRunningTime="2026-03-19 09:49:52.611527645 +0000 UTC m=+977.533105337" Mar 19 09:49:52.634919 master-0 kubenswrapper[27819]: I0319 09:49:52.634781 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-975s6" podStartSLOduration=15.470090975 podStartE2EDuration="23.634755706s" 
podCreationTimestamp="2026-03-19 09:49:29 +0000 UTC" firstStartedPulling="2026-03-19 09:49:41.989048937 +0000 UTC m=+966.910626629" lastFinishedPulling="2026-03-19 09:49:50.153713668 +0000 UTC m=+975.075291360" observedRunningTime="2026-03-19 09:49:52.627192421 +0000 UTC m=+977.548770113" watchObservedRunningTime="2026-03-19 09:49:52.634755706 +0000 UTC m=+977.556333398" Mar 19 09:49:52.864976 master-0 kubenswrapper[27819]: I0319 09:49:52.860930 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79985679c-k6hv5"] Mar 19 09:49:52.864976 master-0 kubenswrapper[27819]: E0319 09:49:52.861614 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ef7ab15-9976-4989-b837-55f0b27ee661" containerName="cinder-255d6-db-sync" Mar 19 09:49:52.864976 master-0 kubenswrapper[27819]: I0319 09:49:52.861638 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ef7ab15-9976-4989-b837-55f0b27ee661" containerName="cinder-255d6-db-sync" Mar 19 09:49:52.864976 master-0 kubenswrapper[27819]: E0319 09:49:52.861660 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" containerName="neutron-db-sync" Mar 19 09:49:52.864976 master-0 kubenswrapper[27819]: I0319 09:49:52.861668 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" containerName="neutron-db-sync" Mar 19 09:49:52.887768 master-0 kubenswrapper[27819]: I0319 09:49:52.874823 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ef7ab15-9976-4989-b837-55f0b27ee661" containerName="cinder-255d6-db-sync" Mar 19 09:49:52.887768 master-0 kubenswrapper[27819]: I0319 09:49:52.875054 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" containerName="neutron-db-sync" Mar 19 09:49:52.891930 master-0 kubenswrapper[27819]: I0319 09:49:52.888474 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:52.905626 master-0 kubenswrapper[27819]: I0319 09:49:52.904262 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79985679c-k6hv5"] Mar 19 09:49:52.963389 master-0 kubenswrapper[27819]: I0319 09:49:52.963338 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99hz\" (UniqueName: \"kubernetes.io/projected/e2ba055e-e43e-4f37-8288-75d4396f6055-kube-api-access-j99hz\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:52.968430 master-0 kubenswrapper[27819]: I0319 09:49:52.964527 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-nb\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:52.969200 master-0 kubenswrapper[27819]: I0319 09:49:52.969131 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-sb\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:52.969474 master-0 kubenswrapper[27819]: I0319 09:49:52.969407 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-config\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:52.970027 master-0 kubenswrapper[27819]: I0319 09:49:52.970005 
27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-swift-storage-0\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:52.980785 master-0 kubenswrapper[27819]: I0319 09:49:52.974902 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-svc\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.018932 master-0 kubenswrapper[27819]: I0319 09:49:53.018870 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-scheduler-0"] Mar 19 09:49:53.032989 master-0 kubenswrapper[27819]: I0319 09:49:53.020901 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.047925 master-0 kubenswrapper[27819]: I0319 09:49:53.045322 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-scheduler-config-data" Mar 19 09:49:53.047925 master-0 kubenswrapper[27819]: I0319 09:49:53.045519 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-config-data" Mar 19 09:49:53.047925 master-0 kubenswrapper[27819]: I0319 09:49:53.045674 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-scripts" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.077663 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99hz\" (UniqueName: \"kubernetes.io/projected/e2ba055e-e43e-4f37-8288-75d4396f6055-kube-api-access-j99hz\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.077728 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-nb\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.077805 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-sb\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.077830 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-config\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.077877 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-swift-storage-0\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.077913 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-svc\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.078722 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-svc\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.082141 master-0 kubenswrapper[27819]: I0319 09:49:53.079437 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-sb\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.121619 master-0 kubenswrapper[27819]: I0319 09:49:53.117453 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-config\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.129692 master-0 kubenswrapper[27819]: I0319 09:49:53.124688 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99hz\" (UniqueName: \"kubernetes.io/projected/e2ba055e-e43e-4f37-8288-75d4396f6055-kube-api-access-j99hz\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.129692 master-0 kubenswrapper[27819]: I0319 09:49:53.128413 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-swift-storage-0\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.139644 master-0 kubenswrapper[27819]: I0319 09:49:53.133476 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-scheduler-0"] Mar 19 09:49:53.139644 master-0 kubenswrapper[27819]: I0319 09:49:53.134393 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-nb\") pod \"dnsmasq-dns-79985679c-k6hv5\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.185570 master-0 kubenswrapper[27819]: I0319 09:49:53.184608 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"] Mar 19 09:49:53.238759 master-0 kubenswrapper[27819]: I0319 09:49:53.217059 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.238759 master-0 kubenswrapper[27819]: I0319 09:49:53.221992 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-volume-lvm-iscsi-config-data" Mar 19 09:49:53.243356 master-0 kubenswrapper[27819]: I0319 09:49:53.241710 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-combined-ca-bundle\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.243356 master-0 kubenswrapper[27819]: I0319 09:49:53.241803 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2510c5df-c68c-4f52-b572-6367fa71fd77-etc-machine-id\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.243356 master-0 kubenswrapper[27819]: I0319 09:49:53.241854 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data-custom\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.243356 master-0 kubenswrapper[27819]: I0319 09:49:53.241929 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjv76\" (UniqueName: \"kubernetes.io/projected/2510c5df-c68c-4f52-b572-6367fa71fd77-kube-api-access-tjv76\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.243356 master-0 
kubenswrapper[27819]: I0319 09:49:53.241959 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.243356 master-0 kubenswrapper[27819]: I0319 09:49:53.241989 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-scripts\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.243850 master-0 kubenswrapper[27819]: I0319 09:49:53.243804 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"] Mar 19 09:49:53.314239 master-0 kubenswrapper[27819]: I0319 09:49:53.314182 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:49:53.318805 master-0 kubenswrapper[27819]: I0319 09:49:53.317654 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79985679c-k6hv5"] Mar 19 09:49:53.318805 master-0 kubenswrapper[27819]: I0319 09:49:53.318702 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:53.320074 master-0 kubenswrapper[27819]: I0319 09:49:53.319273 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.322116 master-0 kubenswrapper[27819]: I0319 09:49:53.321705 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-backup-config-data" Mar 19 09:49:53.323820 master-0 kubenswrapper[27819]: I0319 09:49:53.323792 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:49:53.343928 master-0 kubenswrapper[27819]: I0319 09:49:53.343884 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-795d6cd54b-mpqdp"] Mar 19 09:49:53.346831 master-0 kubenswrapper[27819]: I0319 09:49:53.346798 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.346965 master-0 kubenswrapper[27819]: I0319 09:49:53.346844 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-iscsi\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.346965 master-0 kubenswrapper[27819]: I0319 09:49:53.346872 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2510c5df-c68c-4f52-b572-6367fa71fd77-etc-machine-id\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.346965 master-0 kubenswrapper[27819]: I0319 09:49:53.346895 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" 
(UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-iscsi\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.346965 master-0 kubenswrapper[27819]: I0319 09:49:53.346919 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-sys\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.346965 master-0 kubenswrapper[27819]: I0319 09:49:53.346945 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data-custom\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.346979 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-lib-modules\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.347007 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-nvme\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.347030 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data-custom\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.347052 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-lib-modules\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.347074 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-nvme\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.347115 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.347153 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-lib-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 
09:49:53.347173 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-brick\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347208 master-0 kubenswrapper[27819]: I0319 09:49:53.347193 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347220 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjv76\" (UniqueName: \"kubernetes.io/projected/2510c5df-c68c-4f52-b572-6367fa71fd77-kube-api-access-tjv76\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347243 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-brick\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347283 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 
09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347307 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347356 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-scripts\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347389 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhgn4\" (UniqueName: \"kubernetes.io/projected/3b665618-cc45-40c7-88c5-563951c4ea1f-kube-api-access-bhgn4\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347419 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-machine-id\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347466 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-combined-ca-bundle\") pod 
\"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347494 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-machine-id\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347526 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data-custom\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347578 master-0 kubenswrapper[27819]: I0319 09:49:53.347567 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-dev\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347591 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-dev\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347610 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-scripts\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347638 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-scripts\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347671 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqg6\" (UniqueName: \"kubernetes.io/projected/a65cfcee-1397-46cc-af85-67a07c3e325c-kube-api-access-nnqg6\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347696 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-sys\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347717 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-run\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347754 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" 
(UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-lib-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347776 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-run\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347804 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-combined-ca-bundle\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.347914 master-0 kubenswrapper[27819]: I0319 09:49:53.347827 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-combined-ca-bundle\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.348241 master-0 kubenswrapper[27819]: I0319 09:49:53.348219 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2510c5df-c68c-4f52-b572-6367fa71fd77-etc-machine-id\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.348739 master-0 kubenswrapper[27819]: I0319 09:49:53.348720 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.353327 master-0 kubenswrapper[27819]: I0319 09:49:53.353201 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-scripts\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.353725 master-0 kubenswrapper[27819]: I0319 09:49:53.353573 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 19 09:49:53.354041 master-0 kubenswrapper[27819]: I0319 09:49:53.354006 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 19 09:49:53.354455 master-0 kubenswrapper[27819]: I0319 09:49:53.354250 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 19 09:49:53.356153 master-0 kubenswrapper[27819]: I0319 09:49:53.356079 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data-custom\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.357801 master-0 kubenswrapper[27819]: I0319 09:49:53.357727 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-combined-ca-bundle\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.358481 master-0 kubenswrapper[27819]: I0319 09:49:53.358413 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.360402 master-0 kubenswrapper[27819]: I0319 09:49:53.360361 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79b94d96d9-jqhcm"] Mar 19 09:49:53.362507 master-0 kubenswrapper[27819]: I0319 09:49:53.362472 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.449841 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-lib-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.449900 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-brick\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.449930 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.449963 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-brick\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450000 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450039 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-config\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450069 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhgn4\" (UniqueName: \"kubernetes.io/projected/3b665618-cc45-40c7-88c5-563951c4ea1f-kube-api-access-bhgn4\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450101 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-machine-id\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450142 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-combined-ca-bundle\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450173 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-machine-id\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450203 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-combined-ca-bundle\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450232 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-nb\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450267 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data-custom\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 
09:49:53.450292 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-dev\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450323 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-dev\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450347 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-scripts\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450382 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-scripts\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450402 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-svc\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450421 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr6xz\" (UniqueName: \"kubernetes.io/projected/b67d2371-be56-42c1-9cc1-9323ed72cf7e-kube-api-access-xr6xz\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450440 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-sb\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450468 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnqg6\" (UniqueName: \"kubernetes.io/projected/a65cfcee-1397-46cc-af85-67a07c3e325c-kube-api-access-nnqg6\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450499 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-sys\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450525 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-run\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450565 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-ovndb-tls-certs\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450594 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wsc\" (UniqueName: \"kubernetes.io/projected/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-kube-api-access-f7wsc\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450613 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-lib-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450666 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-run\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450712 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-combined-ca-bundle\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450758 
27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450790 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-iscsi\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450810 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-iscsi\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450833 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-sys\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450858 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-config\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450902 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-lib-modules\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450928 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-nvme\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450955 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data-custom\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450974 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-lib-modules\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.450993 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-nvme\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.451015 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-httpd-config\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.451034 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-swift-storage-0\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.451062 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.452561 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-lib-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.452656 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-lib-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.452680 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-run\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.452713 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-run\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.453264 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-brick\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.453402 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.453457 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-brick\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.453506 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" 
(UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.453745 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-machine-id\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.453761 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-nvme\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.454618 master-0 kubenswrapper[27819]: I0319 09:49:53.453867 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-sys\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.457561 master-0 kubenswrapper[27819]: I0319 09:49:53.457508 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-lib-modules\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.457681 master-0 kubenswrapper[27819]: I0319 09:49:53.457508 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-iscsi\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.457784 master-0 kubenswrapper[27819]: I0319 09:49:53.457533 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-machine-id\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.457891 master-0 kubenswrapper[27819]: I0319 09:49:53.457868 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-dev\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.457997 master-0 kubenswrapper[27819]: I0319 09:49:53.457980 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-dev\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.466977 master-0 kubenswrapper[27819]: I0319 09:49:53.458049 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-iscsi\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.467108 master-0 kubenswrapper[27819]: I0319 09:49:53.458089 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-nvme\") pod \"cinder-255d6-backup-0\" (UID: 
\"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.467241 master-0 kubenswrapper[27819]: I0319 09:49:53.458328 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-sys\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.467335 master-0 kubenswrapper[27819]: I0319 09:49:53.458441 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-lib-modules\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.481847 master-0 kubenswrapper[27819]: I0319 09:49:53.473671 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-795d6cd54b-mpqdp"] Mar 19 09:49:53.513621 master-0 kubenswrapper[27819]: I0319 09:49:53.499525 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b94d96d9-jqhcm"] Mar 19 09:49:53.513621 master-0 kubenswrapper[27819]: I0319 09:49:53.507436 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-combined-ca-bundle\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.513621 master-0 kubenswrapper[27819]: I0319 09:49:53.513503 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjv76\" (UniqueName: \"kubernetes.io/projected/2510c5df-c68c-4f52-b572-6367fa71fd77-kube-api-access-tjv76\") pod \"cinder-255d6-scheduler-0\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") " pod="openstack/cinder-255d6-scheduler-0" Mar 
19 09:49:53.542286 master-0 kubenswrapper[27819]: I0319 09:49:53.519668 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.542286 master-0 kubenswrapper[27819]: I0319 09:49:53.520717 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data-custom\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.542286 master-0 kubenswrapper[27819]: I0319 09:49:53.521045 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-scripts\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.542286 master-0 kubenswrapper[27819]: I0319 09:49:53.521311 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-scripts\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.542286 master-0 kubenswrapper[27819]: I0319 09:49:53.531477 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data-custom\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.542286 master-0 kubenswrapper[27819]: I0319 
09:49:53.535971 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.542286 master-0 kubenswrapper[27819]: I0319 09:49:53.536501 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-combined-ca-bundle\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.553568 master-0 kubenswrapper[27819]: I0319 09:49:53.553510 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-combined-ca-bundle\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.557740 master-0 kubenswrapper[27819]: I0319 09:49:53.557711 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-nb\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.558118 master-0 kubenswrapper[27819]: I0319 09:49:53.557952 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-svc\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.558245 master-0 kubenswrapper[27819]: I0319 09:49:53.558227 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xr6xz\" (UniqueName: \"kubernetes.io/projected/b67d2371-be56-42c1-9cc1-9323ed72cf7e-kube-api-access-xr6xz\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.558340 master-0 kubenswrapper[27819]: I0319 09:49:53.558324 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-sb\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.558557 master-0 kubenswrapper[27819]: I0319 09:49:53.557165 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-combined-ca-bundle\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.559674 master-0 kubenswrapper[27819]: I0319 09:49:53.559635 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-svc\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.559749 master-0 kubenswrapper[27819]: I0319 09:49:53.559646 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-nb\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.559749 master-0 kubenswrapper[27819]: I0319 09:49:53.559681 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-ovndb-tls-certs\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.559849 master-0 kubenswrapper[27819]: I0319 09:49:53.559752 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wsc\" (UniqueName: \"kubernetes.io/projected/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-kube-api-access-f7wsc\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.559925 master-0 kubenswrapper[27819]: I0319 09:49:53.559877 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-config\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.560010 master-0 kubenswrapper[27819]: I0319 09:49:53.559985 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-httpd-config\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.560065 master-0 kubenswrapper[27819]: I0319 09:49:53.560028 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-swift-storage-0\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.560170 master-0 kubenswrapper[27819]: I0319 09:49:53.560144 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-config\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.560310 master-0 kubenswrapper[27819]: I0319 09:49:53.560289 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-sb\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.561647 master-0 kubenswrapper[27819]: I0319 09:49:53.561618 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-config\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.571569 master-0 kubenswrapper[27819]: I0319 09:49:53.562961 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-swift-storage-0\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.585635 master-0 kubenswrapper[27819]: I0319 09:49:53.585576 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-config\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.587731 master-0 kubenswrapper[27819]: I0319 09:49:53.587330 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-httpd-config\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.622017 master-0 kubenswrapper[27819]: I0319 09:49:53.615846 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhgn4\" (UniqueName: \"kubernetes.io/projected/3b665618-cc45-40c7-88c5-563951c4ea1f-kube-api-access-bhgn4\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.622017 master-0 kubenswrapper[27819]: I0319 09:49:53.620365 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnqg6\" (UniqueName: \"kubernetes.io/projected/a65cfcee-1397-46cc-af85-67a07c3e325c-kube-api-access-nnqg6\") pod \"cinder-255d6-backup-0\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.632159 master-0 kubenswrapper[27819]: I0319 09:49:53.625501 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-ovndb-tls-certs\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.632159 master-0 kubenswrapper[27819]: I0319 09:49:53.632091 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wsc\" (UniqueName: \"kubernetes.io/projected/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-kube-api-access-f7wsc\") pod \"neutron-795d6cd54b-mpqdp\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.669315 master-0 kubenswrapper[27819]: I0319 09:49:53.669268 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr6xz\" (UniqueName: 
\"kubernetes.io/projected/b67d2371-be56-42c1-9cc1-9323ed72cf7e-kube-api-access-xr6xz\") pod \"dnsmasq-dns-79b94d96d9-jqhcm\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.693417 master-0 kubenswrapper[27819]: I0319 09:49:53.693280 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:49:53.721960 master-0 kubenswrapper[27819]: I0319 09:49:53.721879 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:53.743803 master-0 kubenswrapper[27819]: I0319 09:49:53.735602 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:49:53.743803 master-0 kubenswrapper[27819]: I0319 09:49:53.737472 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.748528 master-0 kubenswrapper[27819]: I0319 09:49:53.744382 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-api-config-data" Mar 19 09:49:53.748528 master-0 kubenswrapper[27819]: I0319 09:49:53.748031 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:49:53.767510 master-0 kubenswrapper[27819]: I0319 09:49:53.765351 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/321a18ac-d3af-4e1a-bae4-91188771886f-logs\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.767510 master-0 kubenswrapper[27819]: I0319 09:49:53.765422 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-combined-ca-bundle\") pod 
\"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.767510 master-0 kubenswrapper[27819]: I0319 09:49:53.765472 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/321a18ac-d3af-4e1a-bae4-91188771886f-etc-machine-id\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.767510 master-0 kubenswrapper[27819]: I0319 09:49:53.765518 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-scripts\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.767510 master-0 kubenswrapper[27819]: I0319 09:49:53.765592 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.767510 master-0 kubenswrapper[27819]: I0319 09:49:53.765656 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data-custom\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.767510 master-0 kubenswrapper[27819]: I0319 09:49:53.765702 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb75s\" (UniqueName: \"kubernetes.io/projected/321a18ac-d3af-4e1a-bae4-91188771886f-kube-api-access-pb75s\") 
pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.849486 master-0 kubenswrapper[27819]: I0319 09:49:53.848089 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:53.858610 master-0 kubenswrapper[27819]: I0319 09:49:53.856990 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:49:53.863684 master-0 kubenswrapper[27819]: I0319 09:49:53.863602 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:53.872754 master-0 kubenswrapper[27819]: I0319 09:49:53.871459 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data-custom\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.872754 master-0 kubenswrapper[27819]: I0319 09:49:53.871566 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb75s\" (UniqueName: \"kubernetes.io/projected/321a18ac-d3af-4e1a-bae4-91188771886f-kube-api-access-pb75s\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.872754 master-0 kubenswrapper[27819]: I0319 09:49:53.872005 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/321a18ac-d3af-4e1a-bae4-91188771886f-logs\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.872754 master-0 kubenswrapper[27819]: I0319 09:49:53.872075 27819 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-combined-ca-bundle\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.872754 master-0 kubenswrapper[27819]: I0319 09:49:53.872148 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/321a18ac-d3af-4e1a-bae4-91188771886f-etc-machine-id\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.872754 master-0 kubenswrapper[27819]: I0319 09:49:53.872205 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-scripts\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.872754 master-0 kubenswrapper[27819]: I0319 09:49:53.872266 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.873815 master-0 kubenswrapper[27819]: I0319 09:49:53.873673 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/321a18ac-d3af-4e1a-bae4-91188771886f-etc-machine-id\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.874415 master-0 kubenswrapper[27819]: I0319 09:49:53.874376 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/321a18ac-d3af-4e1a-bae4-91188771886f-logs\") pod 
\"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.880339 master-0 kubenswrapper[27819]: I0319 09:49:53.880311 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data-custom\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.884456 master-0 kubenswrapper[27819]: I0319 09:49:53.884098 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-scripts\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.890670 master-0 kubenswrapper[27819]: I0319 09:49:53.890087 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.897915 master-0 kubenswrapper[27819]: I0319 09:49:53.894737 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-combined-ca-bundle\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:49:53.903606 master-0 kubenswrapper[27819]: I0319 09:49:53.898558 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb75s\" (UniqueName: \"kubernetes.io/projected/321a18ac-d3af-4e1a-bae4-91188771886f-kube-api-access-pb75s\") pod \"cinder-255d6-api-0\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") " pod="openstack/cinder-255d6-api-0" Mar 19 
09:49:53.995069 master-0 kubenswrapper[27819]: I0319 09:49:53.995003 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79985679c-k6hv5"] Mar 19 09:49:54.089775 master-0 kubenswrapper[27819]: I0319 09:49:54.089725 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-api-0" Mar 19 09:49:54.745131 master-0 kubenswrapper[27819]: W0319 09:49:54.739629 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda65cfcee_1397_46cc_af85_67a07c3e325c.slice/crio-c30f77e8092274f53ac9f7b71061a9750b976ce8cb0a227613bdf020d51c373f WatchSource:0}: Error finding container c30f77e8092274f53ac9f7b71061a9750b976ce8cb0a227613bdf020d51c373f: Status 404 returned error can't find the container with id c30f77e8092274f53ac9f7b71061a9750b976ce8cb0a227613bdf020d51c373f Mar 19 09:49:54.769851 master-0 kubenswrapper[27819]: I0319 09:49:54.761767 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-scheduler-0"] Mar 19 09:49:54.790603 master-0 kubenswrapper[27819]: I0319 09:49:54.780707 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"2510c5df-c68c-4f52-b572-6367fa71fd77","Type":"ContainerStarted","Data":"c26ddc5d38ba3babe875e3098e6de7a21ce6d7942ef247fbb3e0dd19341354b6"} Mar 19 09:49:54.862602 master-0 kubenswrapper[27819]: I0319 09:49:54.859927 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79985679c-k6hv5" podUID="e2ba055e-e43e-4f37-8288-75d4396f6055" containerName="init" containerID="cri-o://6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f" gracePeriod=10 Mar 19 09:49:54.862602 master-0 kubenswrapper[27819]: I0319 09:49:54.860060 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985679c-k6hv5" 
event={"ID":"e2ba055e-e43e-4f37-8288-75d4396f6055","Type":"ContainerStarted","Data":"6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f"} Mar 19 09:49:54.862602 master-0 kubenswrapper[27819]: I0319 09:49:54.860091 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985679c-k6hv5" event={"ID":"e2ba055e-e43e-4f37-8288-75d4396f6055","Type":"ContainerStarted","Data":"254846a37668dc07bc59f35c0dca7213b6657f1c23799bf4ec79db9501453415"} Mar 19 09:49:54.872362 master-0 kubenswrapper[27819]: I0319 09:49:54.869972 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:49:54.926846 master-0 kubenswrapper[27819]: I0319 09:49:54.924277 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"] Mar 19 09:49:55.047118 master-0 kubenswrapper[27819]: I0319 09:49:55.047071 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79b94d96d9-jqhcm"] Mar 19 09:49:55.127601 master-0 kubenswrapper[27819]: I0319 09:49:55.126616 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:49:55.198147 master-0 kubenswrapper[27819]: I0319 09:49:55.198000 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-795d6cd54b-mpqdp"] Mar 19 09:49:55.864071 master-0 kubenswrapper[27819]: I0319 09:49:55.863940 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:55.880021 master-0 kubenswrapper[27819]: I0319 09:49:55.879969 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"a65cfcee-1397-46cc-af85-67a07c3e325c","Type":"ContainerStarted","Data":"c30f77e8092274f53ac9f7b71061a9750b976ce8cb0a227613bdf020d51c373f"} Mar 19 09:49:55.882119 master-0 kubenswrapper[27819]: I0319 09:49:55.882069 27819 generic.go:334] "Generic (PLEG): container finished" podID="e2ba055e-e43e-4f37-8288-75d4396f6055" containerID="6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f" exitCode=0 Mar 19 09:49:55.882203 master-0 kubenswrapper[27819]: I0319 09:49:55.882140 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985679c-k6hv5" event={"ID":"e2ba055e-e43e-4f37-8288-75d4396f6055","Type":"ContainerDied","Data":"6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f"} Mar 19 09:49:55.883043 master-0 kubenswrapper[27819]: I0319 09:49:55.883018 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79985679c-k6hv5" event={"ID":"e2ba055e-e43e-4f37-8288-75d4396f6055","Type":"ContainerDied","Data":"254846a37668dc07bc59f35c0dca7213b6657f1c23799bf4ec79db9501453415"} Mar 19 09:49:55.883114 master-0 kubenswrapper[27819]: I0319 09:49:55.883056 27819 scope.go:117] "RemoveContainer" containerID="6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f" Mar 19 09:49:55.883206 master-0 kubenswrapper[27819]: I0319 09:49:55.883189 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79985679c-k6hv5" Mar 19 09:49:55.895408 master-0 kubenswrapper[27819]: I0319 09:49:55.895349 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"321a18ac-d3af-4e1a-bae4-91188771886f","Type":"ContainerStarted","Data":"f9017babf2eb8cef987bbefc14233139fc84cfcd10bdde39519dfcd7bfd7b778"} Mar 19 09:49:55.899355 master-0 kubenswrapper[27819]: I0319 09:49:55.899326 27819 generic.go:334] "Generic (PLEG): container finished" podID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerID="f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28" exitCode=0 Mar 19 09:49:55.899482 master-0 kubenswrapper[27819]: I0319 09:49:55.899376 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" event={"ID":"b67d2371-be56-42c1-9cc1-9323ed72cf7e","Type":"ContainerDied","Data":"f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28"} Mar 19 09:49:55.899482 master-0 kubenswrapper[27819]: I0319 09:49:55.899395 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" event={"ID":"b67d2371-be56-42c1-9cc1-9323ed72cf7e","Type":"ContainerStarted","Data":"e58cb939cc3bd2b0736cdb3c28ef9773a2a8d2d4eee610c423166140544ed0a7"} Mar 19 09:49:55.904530 master-0 kubenswrapper[27819]: I0319 09:49:55.904495 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"3b665618-cc45-40c7-88c5-563951c4ea1f","Type":"ContainerStarted","Data":"f757957a18287dc36705c733975e1089e9ba07db73c123b1d1ff852e35a8ce43"} Mar 19 09:49:55.907215 master-0 kubenswrapper[27819]: I0319 09:49:55.907177 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795d6cd54b-mpqdp" event={"ID":"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0","Type":"ContainerStarted","Data":"773154f727dad2f4223683f37ce3ffd5f94657ca52e7a6ad956b5383cc2eda4e"} Mar 19 09:49:55.907215 
master-0 kubenswrapper[27819]: I0319 09:49:55.907213 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795d6cd54b-mpqdp" event={"ID":"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0","Type":"ContainerStarted","Data":"eef23d2e93fecc0d92525cbc3a12e883894b312e42f8674c6084b4c09c800e22"} Mar 19 09:49:55.914841 master-0 kubenswrapper[27819]: I0319 09:49:55.914790 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-nb\") pod \"e2ba055e-e43e-4f37-8288-75d4396f6055\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " Mar 19 09:49:55.914909 master-0 kubenswrapper[27819]: I0319 09:49:55.914889 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-swift-storage-0\") pod \"e2ba055e-e43e-4f37-8288-75d4396f6055\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " Mar 19 09:49:55.915053 master-0 kubenswrapper[27819]: I0319 09:49:55.915028 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-sb\") pod \"e2ba055e-e43e-4f37-8288-75d4396f6055\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " Mar 19 09:49:55.915096 master-0 kubenswrapper[27819]: I0319 09:49:55.915061 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-svc\") pod \"e2ba055e-e43e-4f37-8288-75d4396f6055\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " Mar 19 09:49:55.915285 master-0 kubenswrapper[27819]: I0319 09:49:55.915199 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-config\") pod \"e2ba055e-e43e-4f37-8288-75d4396f6055\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " Mar 19 09:49:55.915285 master-0 kubenswrapper[27819]: I0319 09:49:55.915256 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j99hz\" (UniqueName: \"kubernetes.io/projected/e2ba055e-e43e-4f37-8288-75d4396f6055-kube-api-access-j99hz\") pod \"e2ba055e-e43e-4f37-8288-75d4396f6055\" (UID: \"e2ba055e-e43e-4f37-8288-75d4396f6055\") " Mar 19 09:49:55.920482 master-0 kubenswrapper[27819]: I0319 09:49:55.920433 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ba055e-e43e-4f37-8288-75d4396f6055-kube-api-access-j99hz" (OuterVolumeSpecName: "kube-api-access-j99hz") pod "e2ba055e-e43e-4f37-8288-75d4396f6055" (UID: "e2ba055e-e43e-4f37-8288-75d4396f6055"). InnerVolumeSpecName "kube-api-access-j99hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:49:55.935229 master-0 kubenswrapper[27819]: I0319 09:49:55.935151 27819 scope.go:117] "RemoveContainer" containerID="6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f" Mar 19 09:49:55.946564 master-0 kubenswrapper[27819]: E0319 09:49:55.937057 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f\": container with ID starting with 6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f not found: ID does not exist" containerID="6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f" Mar 19 09:49:55.946564 master-0 kubenswrapper[27819]: I0319 09:49:55.937101 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f"} err="failed to get container status 
\"6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f\": rpc error: code = NotFound desc = could not find container \"6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f\": container with ID starting with 6f2d3f2a9ac1579b3884fd368be9c029cfae03b248cbd4ead79e00c196c68e3f not found: ID does not exist" Mar 19 09:49:55.967126 master-0 kubenswrapper[27819]: I0319 09:49:55.967073 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2ba055e-e43e-4f37-8288-75d4396f6055" (UID: "e2ba055e-e43e-4f37-8288-75d4396f6055"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:55.979451 master-0 kubenswrapper[27819]: I0319 09:49:55.979323 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2ba055e-e43e-4f37-8288-75d4396f6055" (UID: "e2ba055e-e43e-4f37-8288-75d4396f6055"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:55.994283 master-0 kubenswrapper[27819]: I0319 09:49:55.994231 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2ba055e-e43e-4f37-8288-75d4396f6055" (UID: "e2ba055e-e43e-4f37-8288-75d4396f6055"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:56.012824 master-0 kubenswrapper[27819]: I0319 09:49:56.012764 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2ba055e-e43e-4f37-8288-75d4396f6055" (UID: "e2ba055e-e43e-4f37-8288-75d4396f6055"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:56.017300 master-0 kubenswrapper[27819]: I0319 09:49:56.017253 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:56.017300 master-0 kubenswrapper[27819]: I0319 09:49:56.017284 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:56.017300 master-0 kubenswrapper[27819]: I0319 09:49:56.017297 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:56.017300 master-0 kubenswrapper[27819]: I0319 09:49:56.017306 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:56.017565 master-0 kubenswrapper[27819]: I0319 09:49:56.017316 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j99hz\" (UniqueName: \"kubernetes.io/projected/e2ba055e-e43e-4f37-8288-75d4396f6055-kube-api-access-j99hz\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:56.035396 
master-0 kubenswrapper[27819]: I0319 09:49:56.035339 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-config" (OuterVolumeSpecName: "config") pod "e2ba055e-e43e-4f37-8288-75d4396f6055" (UID: "e2ba055e-e43e-4f37-8288-75d4396f6055"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:49:56.121118 master-0 kubenswrapper[27819]: I0319 09:49:56.120955 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2ba055e-e43e-4f37-8288-75d4396f6055-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:49:56.928676 master-0 kubenswrapper[27819]: I0319 09:49:56.922885 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795d6cd54b-mpqdp" event={"ID":"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0","Type":"ContainerStarted","Data":"2e2b4283771e634f66af83b320d8bb1c6334ae2d8468bf708b29ea5ace96a062"} Mar 19 09:49:56.928676 master-0 kubenswrapper[27819]: I0319 09:49:56.923878 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:49:56.928676 master-0 kubenswrapper[27819]: I0319 09:49:56.928155 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"a65cfcee-1397-46cc-af85-67a07c3e325c","Type":"ContainerStarted","Data":"31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456"} Mar 19 09:49:56.931818 master-0 kubenswrapper[27819]: I0319 09:49:56.931792 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" event={"ID":"b67d2371-be56-42c1-9cc1-9323ed72cf7e","Type":"ContainerStarted","Data":"f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc"} Mar 19 09:49:56.932270 master-0 kubenswrapper[27819]: I0319 09:49:56.931976 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:49:56.933856 master-0 kubenswrapper[27819]: I0319 09:49:56.933478 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"321a18ac-d3af-4e1a-bae4-91188771886f","Type":"ContainerStarted","Data":"287c611a82a89cf125a13cd723a3fd9366fbea3a043315edd46a44f638932a1f"} Mar 19 09:49:57.406626 master-0 kubenswrapper[27819]: I0319 09:49:57.402313 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-795d6cd54b-mpqdp" podStartSLOduration=4.402288975 podStartE2EDuration="4.402288975s" podCreationTimestamp="2026-03-19 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:57.315285192 +0000 UTC m=+982.236862894" watchObservedRunningTime="2026-03-19 09:49:57.402288975 +0000 UTC m=+982.323866667" Mar 19 09:49:57.493113 master-0 kubenswrapper[27819]: I0319 09:49:57.492717 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" podStartSLOduration=4.492698837 podStartE2EDuration="4.492698837s" podCreationTimestamp="2026-03-19 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:57.395325885 +0000 UTC m=+982.316903587" watchObservedRunningTime="2026-03-19 09:49:57.492698837 +0000 UTC m=+982.414276529" Mar 19 09:49:57.520654 master-0 kubenswrapper[27819]: I0319 09:49:57.520589 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79985679c-k6hv5"] Mar 19 09:49:57.520933 master-0 kubenswrapper[27819]: I0319 09:49:57.520895 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79985679c-k6hv5"] Mar 19 09:49:57.578588 master-0 kubenswrapper[27819]: I0319 09:49:57.576874 27819 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:49:57.968785 master-0 kubenswrapper[27819]: I0319 09:49:57.967761 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"a65cfcee-1397-46cc-af85-67a07c3e325c","Type":"ContainerStarted","Data":"47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c"} Mar 19 09:49:57.978669 master-0 kubenswrapper[27819]: I0319 09:49:57.978625 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"321a18ac-d3af-4e1a-bae4-91188771886f","Type":"ContainerStarted","Data":"26d8e61b953a8bffcad3912a61a992666a47b045d8e247b02a42c08a67cc076e"} Mar 19 09:49:57.978919 master-0 kubenswrapper[27819]: I0319 09:49:57.978836 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-api-0" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-255d6-api-log" containerID="cri-o://287c611a82a89cf125a13cd723a3fd9366fbea3a043315edd46a44f638932a1f" gracePeriod=30 Mar 19 09:49:57.979105 master-0 kubenswrapper[27819]: I0319 09:49:57.979090 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-255d6-api-0" Mar 19 09:49:57.979199 master-0 kubenswrapper[27819]: I0319 09:49:57.979143 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-api-0" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-api" containerID="cri-o://26d8e61b953a8bffcad3912a61a992666a47b045d8e247b02a42c08a67cc076e" gracePeriod=30 Mar 19 09:49:57.990939 master-0 kubenswrapper[27819]: I0319 09:49:57.990485 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"2510c5df-c68c-4f52-b572-6367fa71fd77","Type":"ContainerStarted","Data":"dff42349c25d0f566f9088e42f49853ad4960376424175b12c0eb7abdbd9a7eb"} Mar 19 09:49:58.075098 master-0 kubenswrapper[27819]: I0319 
09:49:58.065785 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-backup-0" podStartSLOduration=3.900478787 podStartE2EDuration="5.065751212s" podCreationTimestamp="2026-03-19 09:49:53 +0000 UTC" firstStartedPulling="2026-03-19 09:49:54.855608476 +0000 UTC m=+979.777186168" lastFinishedPulling="2026-03-19 09:49:56.020880901 +0000 UTC m=+980.942458593" observedRunningTime="2026-03-19 09:49:58.036109204 +0000 UTC m=+982.957686906" watchObservedRunningTime="2026-03-19 09:49:58.065751212 +0000 UTC m=+982.987328904" Mar 19 09:49:58.102334 master-0 kubenswrapper[27819]: I0319 09:49:58.101410 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-api-0" podStartSLOduration=5.101384245 podStartE2EDuration="5.101384245s" podCreationTimestamp="2026-03-19 09:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:49:58.090929944 +0000 UTC m=+983.012507636" watchObservedRunningTime="2026-03-19 09:49:58.101384245 +0000 UTC m=+983.022961937" Mar 19 09:49:58.722386 master-0 kubenswrapper[27819]: I0319 09:49:58.722290 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-backup-0" Mar 19 09:49:59.016707 master-0 kubenswrapper[27819]: I0319 09:49:59.016455 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"2510c5df-c68c-4f52-b572-6367fa71fd77","Type":"ContainerStarted","Data":"81226ef575ecc13d89e1c7566d0981749357755277809d2a9f7a25027e9b68a7"} Mar 19 09:49:59.021014 master-0 kubenswrapper[27819]: I0319 09:49:59.020967 27819 generic.go:334] "Generic (PLEG): container finished" podID="321a18ac-d3af-4e1a-bae4-91188771886f" containerID="287c611a82a89cf125a13cd723a3fd9366fbea3a043315edd46a44f638932a1f" exitCode=143 Mar 19 09:49:59.021358 master-0 kubenswrapper[27819]: I0319 
09:49:59.021031 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"321a18ac-d3af-4e1a-bae4-91188771886f","Type":"ContainerDied","Data":"287c611a82a89cf125a13cd723a3fd9366fbea3a043315edd46a44f638932a1f"} Mar 19 09:49:59.024987 master-0 kubenswrapper[27819]: I0319 09:49:59.024950 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"3b665618-cc45-40c7-88c5-563951c4ea1f","Type":"ContainerStarted","Data":"02a27e0081b779b16abdf2a07e87b88cc5763d470b13fc3784f80723f0430c9a"} Mar 19 09:49:59.024987 master-0 kubenswrapper[27819]: I0319 09:49:59.024988 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"3b665618-cc45-40c7-88c5-563951c4ea1f","Type":"ContainerStarted","Data":"4bdf4d30e5c9feb5390d1af70e36c977732cf13d98296b3a877c1d8a88e60da2"} Mar 19 09:49:59.044674 master-0 kubenswrapper[27819]: I0319 09:49:59.043521 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-scheduler-0" podStartSLOduration=5.690807169 podStartE2EDuration="7.04350114s" podCreationTimestamp="2026-03-19 09:49:52 +0000 UTC" firstStartedPulling="2026-03-19 09:49:54.667456542 +0000 UTC m=+979.589034234" lastFinishedPulling="2026-03-19 09:49:56.020150513 +0000 UTC m=+980.941728205" observedRunningTime="2026-03-19 09:49:59.038654434 +0000 UTC m=+983.960232136" watchObservedRunningTime="2026-03-19 09:49:59.04350114 +0000 UTC m=+983.965078832" Mar 19 09:49:59.093032 master-0 kubenswrapper[27819]: I0319 09:49:59.092951 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podStartSLOduration=4.406436128 podStartE2EDuration="7.09292798s" podCreationTimestamp="2026-03-19 09:49:52 +0000 UTC" firstStartedPulling="2026-03-19 09:49:55.011694459 +0000 UTC m=+979.933272161" lastFinishedPulling="2026-03-19 09:49:57.698186321 
+0000 UTC m=+982.619764013" observedRunningTime="2026-03-19 09:49:59.075055237 +0000 UTC m=+983.996632929" watchObservedRunningTime="2026-03-19 09:49:59.09292798 +0000 UTC m=+984.014505672" Mar 19 09:49:59.301360 master-0 kubenswrapper[27819]: I0319 09:49:59.301203 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2ba055e-e43e-4f37-8288-75d4396f6055" path="/var/lib/kubelet/pods/e2ba055e-e43e-4f37-8288-75d4396f6055/volumes" Mar 19 09:50:00.279848 master-0 kubenswrapper[27819]: I0319 09:50:00.279780 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-865fc7c8cc-c7jz9"] Mar 19 09:50:00.280665 master-0 kubenswrapper[27819]: E0319 09:50:00.280339 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ba055e-e43e-4f37-8288-75d4396f6055" containerName="init" Mar 19 09:50:00.280665 master-0 kubenswrapper[27819]: I0319 09:50:00.280361 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ba055e-e43e-4f37-8288-75d4396f6055" containerName="init" Mar 19 09:50:00.280786 master-0 kubenswrapper[27819]: I0319 09:50:00.280710 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ba055e-e43e-4f37-8288-75d4396f6055" containerName="init" Mar 19 09:50:00.283701 master-0 kubenswrapper[27819]: I0319 09:50:00.283090 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.287837 master-0 kubenswrapper[27819]: I0319 09:50:00.287797 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 19 09:50:00.291381 master-0 kubenswrapper[27819]: I0319 09:50:00.291320 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 19 09:50:00.329098 master-0 kubenswrapper[27819]: I0319 09:50:00.323354 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-865fc7c8cc-c7jz9"] Mar 19 09:50:00.393137 master-0 kubenswrapper[27819]: I0319 09:50:00.393060 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-config\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.393137 master-0 kubenswrapper[27819]: I0319 09:50:00.393134 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-internal-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.393390 master-0 kubenswrapper[27819]: I0319 09:50:00.393198 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-combined-ca-bundle\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.393390 master-0 kubenswrapper[27819]: I0319 09:50:00.393242 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ld2kb\" (UniqueName: \"kubernetes.io/projected/a78c8a72-ff74-4e31-957e-e04d67f734f4-kube-api-access-ld2kb\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.393390 master-0 kubenswrapper[27819]: I0319 09:50:00.393274 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-httpd-config\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.393390 master-0 kubenswrapper[27819]: I0319 09:50:00.393338 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-public-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.393510 master-0 kubenswrapper[27819]: I0319 09:50:00.393473 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-ovndb-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.495297 master-0 kubenswrapper[27819]: I0319 09:50:00.495247 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-combined-ca-bundle\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.495608 master-0 kubenswrapper[27819]: I0319 09:50:00.495591 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2kb\" (UniqueName: \"kubernetes.io/projected/a78c8a72-ff74-4e31-957e-e04d67f734f4-kube-api-access-ld2kb\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.495718 master-0 kubenswrapper[27819]: I0319 09:50:00.495704 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-httpd-config\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.495853 master-0 kubenswrapper[27819]: I0319 09:50:00.495840 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-public-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.495959 master-0 kubenswrapper[27819]: I0319 09:50:00.495946 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-ovndb-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.496181 master-0 kubenswrapper[27819]: I0319 09:50:00.496168 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-config\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.497708 master-0 kubenswrapper[27819]: I0319 09:50:00.497663 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-internal-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.500007 master-0 kubenswrapper[27819]: I0319 09:50:00.499969 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-combined-ca-bundle\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.502171 master-0 kubenswrapper[27819]: I0319 09:50:00.501144 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-public-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.503574 master-0 kubenswrapper[27819]: I0319 09:50:00.503529 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-ovndb-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.514647 master-0 kubenswrapper[27819]: I0319 09:50:00.512203 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-internal-tls-certs\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.514647 master-0 kubenswrapper[27819]: I0319 09:50:00.512421 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-httpd-config\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.514647 master-0 kubenswrapper[27819]: I0319 09:50:00.512742 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a78c8a72-ff74-4e31-957e-e04d67f734f4-config\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.524073 master-0 kubenswrapper[27819]: I0319 09:50:00.523583 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2kb\" (UniqueName: \"kubernetes.io/projected/a78c8a72-ff74-4e31-957e-e04d67f734f4-kube-api-access-ld2kb\") pod \"neutron-865fc7c8cc-c7jz9\" (UID: \"a78c8a72-ff74-4e31-957e-e04d67f734f4\") " pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:00.625485 master-0 kubenswrapper[27819]: I0319 09:50:00.625399 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:01.246360 master-0 kubenswrapper[27819]: I0319 09:50:01.246293 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-865fc7c8cc-c7jz9"] Mar 19 09:50:01.260327 master-0 kubenswrapper[27819]: W0319 09:50:01.259937 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda78c8a72_ff74_4e31_957e_e04d67f734f4.slice/crio-220b70b7845efac3d53853453705289a4c65d80f8cf0236e0ddeb929c59c5266 WatchSource:0}: Error finding container 220b70b7845efac3d53853453705289a4c65d80f8cf0236e0ddeb929c59c5266: Status 404 returned error can't find the container with id 220b70b7845efac3d53853453705289a4c65d80f8cf0236e0ddeb929c59c5266 Mar 19 09:50:02.063155 master-0 kubenswrapper[27819]: I0319 09:50:02.063014 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc7c8cc-c7jz9" event={"ID":"a78c8a72-ff74-4e31-957e-e04d67f734f4","Type":"ContainerStarted","Data":"8a38f9789b02f417878cb9190282ff8d08175a4d8a2ea46d35fd67808a9fedb9"} Mar 19 09:50:02.063155 master-0 kubenswrapper[27819]: I0319 09:50:02.063072 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc7c8cc-c7jz9" event={"ID":"a78c8a72-ff74-4e31-957e-e04d67f734f4","Type":"ContainerStarted","Data":"94ecf028e181439b7f3467c7825f09ac705f708fda4e15ee3791b9267a5604be"} Mar 19 09:50:02.063155 master-0 kubenswrapper[27819]: I0319 09:50:02.063083 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-865fc7c8cc-c7jz9" event={"ID":"a78c8a72-ff74-4e31-957e-e04d67f734f4","Type":"ContainerStarted","Data":"220b70b7845efac3d53853453705289a4c65d80f8cf0236e0ddeb929c59c5266"} Mar 19 09:50:02.064084 master-0 kubenswrapper[27819]: I0319 09:50:02.064060 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:02.092527 master-0 
kubenswrapper[27819]: I0319 09:50:02.092453 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-865fc7c8cc-c7jz9" podStartSLOduration=2.092433999 podStartE2EDuration="2.092433999s" podCreationTimestamp="2026-03-19 09:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:02.091634149 +0000 UTC m=+987.013211841" watchObservedRunningTime="2026-03-19 09:50:02.092433999 +0000 UTC m=+987.014011681" Mar 19 09:50:03.694242 master-0 kubenswrapper[27819]: I0319 09:50:03.694182 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:03.858598 master-0 kubenswrapper[27819]: I0319 09:50:03.858501 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:03.869492 master-0 kubenswrapper[27819]: I0319 09:50:03.867989 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:50:04.040390 master-0 kubenswrapper[27819]: I0319 09:50:04.039554 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c7b89c5-sw8qr"] Mar 19 09:50:04.040390 master-0 kubenswrapper[27819]: I0319 09:50:04.039849 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" podUID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerName="dnsmasq-dns" containerID="cri-o://18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5" gracePeriod=10 Mar 19 09:50:04.065670 master-0 kubenswrapper[27819]: I0319 09:50:04.050402 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:04.065670 master-0 kubenswrapper[27819]: I0319 09:50:04.050499 27819 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:04.215401 master-0 kubenswrapper[27819]: I0319 09:50:04.215344 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:50:04.215617 master-0 kubenswrapper[27819]: I0319 09:50:04.215578 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-backup-0" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="cinder-backup" containerID="cri-o://31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456" gracePeriod=30 Mar 19 09:50:04.215932 master-0 kubenswrapper[27819]: I0319 09:50:04.215720 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-backup-0" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="probe" containerID="cri-o://47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c" gracePeriod=30 Mar 19 09:50:04.266430 master-0 kubenswrapper[27819]: I0319 09:50:04.266372 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-scheduler-0"] Mar 19 09:50:04.533503 master-0 kubenswrapper[27819]: I0319 09:50:04.525425 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:04.622435 master-0 kubenswrapper[27819]: I0319 09:50:04.622340 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"] Mar 19 09:50:04.868337 master-0 kubenswrapper[27819]: I0319 09:50:04.867846 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:50:04.973622 master-0 kubenswrapper[27819]: I0319 09:50:04.973504 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-swift-storage-0\") pod \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " Mar 19 09:50:04.975220 master-0 kubenswrapper[27819]: I0319 09:50:04.973996 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpn99\" (UniqueName: \"kubernetes.io/projected/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-kube-api-access-zpn99\") pod \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " Mar 19 09:50:04.975220 master-0 kubenswrapper[27819]: I0319 09:50:04.974064 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-nb\") pod \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " Mar 19 09:50:04.975220 master-0 kubenswrapper[27819]: I0319 09:50:04.974202 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-svc\") pod \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " Mar 19 09:50:04.975220 master-0 kubenswrapper[27819]: I0319 09:50:04.974269 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-config\") pod \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " Mar 19 09:50:04.975220 master-0 kubenswrapper[27819]: I0319 09:50:04.974306 27819 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-sb\") pod \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\" (UID: \"5934b50c-8a57-4df0-83a5-a6cf7279d7f8\") " Mar 19 09:50:05.038943 master-0 kubenswrapper[27819]: I0319 09:50:05.038847 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-kube-api-access-zpn99" (OuterVolumeSpecName: "kube-api-access-zpn99") pod "5934b50c-8a57-4df0-83a5-a6cf7279d7f8" (UID: "5934b50c-8a57-4df0-83a5-a6cf7279d7f8"). InnerVolumeSpecName "kube-api-access-zpn99". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:05.080050 master-0 kubenswrapper[27819]: I0319 09:50:05.079981 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpn99\" (UniqueName: \"kubernetes.io/projected/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-kube-api-access-zpn99\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:05.114968 master-0 kubenswrapper[27819]: I0319 09:50:05.114789 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5934b50c-8a57-4df0-83a5-a6cf7279d7f8" (UID: "5934b50c-8a57-4df0-83a5-a6cf7279d7f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:05.151213 master-0 kubenswrapper[27819]: I0319 09:50:05.151114 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5934b50c-8a57-4df0-83a5-a6cf7279d7f8" (UID: "5934b50c-8a57-4df0-83a5-a6cf7279d7f8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:05.173419 master-0 kubenswrapper[27819]: I0319 09:50:05.173277 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-config" (OuterVolumeSpecName: "config") pod "5934b50c-8a57-4df0-83a5-a6cf7279d7f8" (UID: "5934b50c-8a57-4df0-83a5-a6cf7279d7f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:05.180063 master-0 kubenswrapper[27819]: I0319 09:50:05.180011 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5934b50c-8a57-4df0-83a5-a6cf7279d7f8" (UID: "5934b50c-8a57-4df0-83a5-a6cf7279d7f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:05.187807 master-0 kubenswrapper[27819]: I0319 09:50:05.187729 27819 generic.go:334] "Generic (PLEG): container finished" podID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerID="18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5" exitCode=0 Mar 19 09:50:05.188015 master-0 kubenswrapper[27819]: I0319 09:50:05.187978 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-scheduler-0" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="cinder-scheduler" containerID="cri-o://dff42349c25d0f566f9088e42f49853ad4960376424175b12c0eb7abdbd9a7eb" gracePeriod=30 Mar 19 09:50:05.188419 master-0 kubenswrapper[27819]: I0319 09:50:05.188381 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" Mar 19 09:50:05.188971 master-0 kubenswrapper[27819]: I0319 09:50:05.188942 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" event={"ID":"5934b50c-8a57-4df0-83a5-a6cf7279d7f8","Type":"ContainerDied","Data":"18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5"} Mar 19 09:50:05.189027 master-0 kubenswrapper[27819]: I0319 09:50:05.188980 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c7b89c5-sw8qr" event={"ID":"5934b50c-8a57-4df0-83a5-a6cf7279d7f8","Type":"ContainerDied","Data":"fa4763b947d4afe309fab8e168158ad1b36d0c981ae0a248a54ccb22b12ab041"} Mar 19 09:50:05.189027 master-0 kubenswrapper[27819]: I0319 09:50:05.189001 27819 scope.go:117] "RemoveContainer" containerID="18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5" Mar 19 09:50:05.189244 master-0 kubenswrapper[27819]: I0319 09:50:05.189218 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="cinder-volume" containerID="cri-o://4bdf4d30e5c9feb5390d1af70e36c977732cf13d98296b3a877c1d8a88e60da2" gracePeriod=30 Mar 19 09:50:05.191435 master-0 kubenswrapper[27819]: I0319 09:50:05.191279 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-scheduler-0" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="probe" containerID="cri-o://81226ef575ecc13d89e1c7566d0981749357755277809d2a9f7a25027e9b68a7" gracePeriod=30 Mar 19 09:50:05.191435 master-0 kubenswrapper[27819]: I0319 09:50:05.191322 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="probe" 
containerID="cri-o://02a27e0081b779b16abdf2a07e87b88cc5763d470b13fc3784f80723f0430c9a" gracePeriod=30 Mar 19 09:50:05.201974 master-0 kubenswrapper[27819]: I0319 09:50:05.201882 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:05.201974 master-0 kubenswrapper[27819]: I0319 09:50:05.201931 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:05.201974 master-0 kubenswrapper[27819]: I0319 09:50:05.201943 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:05.201974 master-0 kubenswrapper[27819]: I0319 09:50:05.201952 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:05.215142 master-0 kubenswrapper[27819]: I0319 09:50:05.215094 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5934b50c-8a57-4df0-83a5-a6cf7279d7f8" (UID: "5934b50c-8a57-4df0-83a5-a6cf7279d7f8"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:05.313770 master-0 kubenswrapper[27819]: I0319 09:50:05.304697 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5934b50c-8a57-4df0-83a5-a6cf7279d7f8-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:05.358525 master-0 kubenswrapper[27819]: I0319 09:50:05.356767 27819 scope.go:117] "RemoveContainer" containerID="49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03" Mar 19 09:50:05.411644 master-0 kubenswrapper[27819]: I0319 09:50:05.410336 27819 scope.go:117] "RemoveContainer" containerID="18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5" Mar 19 09:50:05.411945 master-0 kubenswrapper[27819]: E0319 09:50:05.411834 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5\": container with ID starting with 18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5 not found: ID does not exist" containerID="18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5" Mar 19 09:50:05.411945 master-0 kubenswrapper[27819]: I0319 09:50:05.411872 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5"} err="failed to get container status \"18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5\": rpc error: code = NotFound desc = could not find container \"18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5\": container with ID starting with 18c3f588b285cdcfc995bdb66eb5e34fb159db12e0e57e29b67c4089398120d5 not found: ID does not exist" Mar 19 09:50:05.411945 master-0 kubenswrapper[27819]: I0319 09:50:05.411891 27819 scope.go:117] "RemoveContainer" 
containerID="49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03" Mar 19 09:50:05.413353 master-0 kubenswrapper[27819]: E0319 09:50:05.412269 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03\": container with ID starting with 49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03 not found: ID does not exist" containerID="49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03" Mar 19 09:50:05.413353 master-0 kubenswrapper[27819]: I0319 09:50:05.412296 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03"} err="failed to get container status \"49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03\": rpc error: code = NotFound desc = could not find container \"49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03\": container with ID starting with 49931eff7257cdcb10ab8ba8008a182095c8bd2d67409cfc84b1131d3b8aff03 not found: ID does not exist" Mar 19 09:50:05.545568 master-0 kubenswrapper[27819]: I0319 09:50:05.543616 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c7b89c5-sw8qr"] Mar 19 09:50:05.560568 master-0 kubenswrapper[27819]: I0319 09:50:05.557667 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c7b89c5-sw8qr"] Mar 19 09:50:06.227215 master-0 kubenswrapper[27819]: I0319 09:50:06.227005 27819 generic.go:334] "Generic (PLEG): container finished" podID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerID="47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c" exitCode=0 Mar 19 09:50:06.227215 master-0 kubenswrapper[27819]: I0319 09:50:06.227135 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" 
event={"ID":"a65cfcee-1397-46cc-af85-67a07c3e325c","Type":"ContainerDied","Data":"47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c"} Mar 19 09:50:06.229520 master-0 kubenswrapper[27819]: I0319 09:50:06.229470 27819 generic.go:334] "Generic (PLEG): container finished" podID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerID="02a27e0081b779b16abdf2a07e87b88cc5763d470b13fc3784f80723f0430c9a" exitCode=0 Mar 19 09:50:06.229520 master-0 kubenswrapper[27819]: I0319 09:50:06.229511 27819 generic.go:334] "Generic (PLEG): container finished" podID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerID="4bdf4d30e5c9feb5390d1af70e36c977732cf13d98296b3a877c1d8a88e60da2" exitCode=0 Mar 19 09:50:06.229835 master-0 kubenswrapper[27819]: I0319 09:50:06.229577 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"3b665618-cc45-40c7-88c5-563951c4ea1f","Type":"ContainerDied","Data":"02a27e0081b779b16abdf2a07e87b88cc5763d470b13fc3784f80723f0430c9a"} Mar 19 09:50:06.229835 master-0 kubenswrapper[27819]: I0319 09:50:06.229608 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"3b665618-cc45-40c7-88c5-563951c4ea1f","Type":"ContainerDied","Data":"4bdf4d30e5c9feb5390d1af70e36c977732cf13d98296b3a877c1d8a88e60da2"} Mar 19 09:50:06.231932 master-0 kubenswrapper[27819]: I0319 09:50:06.231883 27819 generic.go:334] "Generic (PLEG): container finished" podID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerID="81226ef575ecc13d89e1c7566d0981749357755277809d2a9f7a25027e9b68a7" exitCode=0 Mar 19 09:50:06.232031 master-0 kubenswrapper[27819]: I0319 09:50:06.231933 27819 generic.go:334] "Generic (PLEG): container finished" podID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerID="dff42349c25d0f566f9088e42f49853ad4960376424175b12c0eb7abdbd9a7eb" exitCode=0 Mar 19 09:50:06.232031 master-0 kubenswrapper[27819]: I0319 09:50:06.231972 27819 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"2510c5df-c68c-4f52-b572-6367fa71fd77","Type":"ContainerDied","Data":"81226ef575ecc13d89e1c7566d0981749357755277809d2a9f7a25027e9b68a7"} Mar 19 09:50:06.232031 master-0 kubenswrapper[27819]: I0319 09:50:06.232005 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"2510c5df-c68c-4f52-b572-6367fa71fd77","Type":"ContainerDied","Data":"dff42349c25d0f566f9088e42f49853ad4960376424175b12c0eb7abdbd9a7eb"} Mar 19 09:50:06.697165 master-0 kubenswrapper[27819]: I0319 09:50:06.697049 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762311 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-scripts\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762415 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-iscsi\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762477 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-combined-ca-bundle\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762560 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-cinder\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762591 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-run\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762628 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762690 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data-custom\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762727 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-lib-cinder\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762841 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-sys\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: 
\"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762857 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-lib-modules\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762880 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhgn4\" (UniqueName: \"kubernetes.io/projected/3b665618-cc45-40c7-88c5-563951c4ea1f-kube-api-access-bhgn4\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762901 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-dev\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762918 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-brick\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762946 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-machine-id\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.762985 27819 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-nvme\") pod \"3b665618-cc45-40c7-88c5-563951c4ea1f\" (UID: \"3b665618-cc45-40c7-88c5-563951c4ea1f\") " Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.763521 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.763570 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.763756 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-sys" (OuterVolumeSpecName: "sys") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.763827 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.763856 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-run" (OuterVolumeSpecName: "run") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.764257 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-dev" (OuterVolumeSpecName: "dev") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.764292 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.766810 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.766915 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.788273 master-0 kubenswrapper[27819]: I0319 09:50:06.766945 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 19 09:50:06.799447 master-0 kubenswrapper[27819]: I0319 09:50:06.795038 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b665618-cc45-40c7-88c5-563951c4ea1f-kube-api-access-bhgn4" (OuterVolumeSpecName: "kube-api-access-bhgn4") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "kube-api-access-bhgn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:06.799447 master-0 kubenswrapper[27819]: I0319 09:50:06.797344 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:06.821576 master-0 kubenswrapper[27819]: I0319 09:50:06.820419 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-scripts" (OuterVolumeSpecName: "scripts") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868392 27819 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868449 27819 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-run\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868460 27819 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868468 27819 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868477 27819 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-sys\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868485 27819 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868494 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhgn4\" (UniqueName: \"kubernetes.io/projected/3b665618-cc45-40c7-88c5-563951c4ea1f-kube-api-access-bhgn4\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868505 27819 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-dev\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868517 27819 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868526 27819 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868534 27819 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868557 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.875582 master-0 kubenswrapper[27819]: I0319 09:50:06.868567 27819 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3b665618-cc45-40c7-88c5-563951c4ea1f-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.906155 master-0 kubenswrapper[27819]: I0319 09:50:06.906007 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:06.916576 master-0 kubenswrapper[27819]: I0319 09:50:06.916226 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:06.953636 master-0 kubenswrapper[27819]: I0319 09:50:06.951720 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data" (OuterVolumeSpecName: "config-data") pod "3b665618-cc45-40c7-88c5-563951c4ea1f" (UID: "3b665618-cc45-40c7-88c5-563951c4ea1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:06.973568 master-0 kubenswrapper[27819]: I0319 09:50:06.970786 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:06.973568 master-0 kubenswrapper[27819]: I0319 09:50:06.970837 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b665618-cc45-40c7-88c5-563951c4ea1f-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.023573 master-0 kubenswrapper[27819]: I0319 09:50:07.022946 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.072581 master-0 kubenswrapper[27819]: I0319 09:50:07.072324 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjv76\" (UniqueName: \"kubernetes.io/projected/2510c5df-c68c-4f52-b572-6367fa71fd77-kube-api-access-tjv76\") pod \"2510c5df-c68c-4f52-b572-6367fa71fd77\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") "
Mar 19 09:50:07.072581 master-0 kubenswrapper[27819]: I0319 09:50:07.072397 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data-custom\") pod \"2510c5df-c68c-4f52-b572-6367fa71fd77\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073565 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-combined-ca-bundle\") pod \"2510c5df-c68c-4f52-b572-6367fa71fd77\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073694 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073725 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-brick\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073742 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data-custom\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073787 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-combined-ca-bundle\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073805 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data\") pod \"2510c5df-c68c-4f52-b572-6367fa71fd77\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073833 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2510c5df-c68c-4f52-b572-6367fa71fd77-etc-machine-id\") pod \"2510c5df-c68c-4f52-b572-6367fa71fd77\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073848 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-cinder\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073869 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-lib-modules\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073883 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-nvme\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073904 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-machine-id\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073926 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-scripts\") pod \"2510c5df-c68c-4f52-b572-6367fa71fd77\" (UID: \"2510c5df-c68c-4f52-b572-6367fa71fd77\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.073940 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-sys\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.074376 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-sys" (OuterVolumeSpecName: "sys") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.075130 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2510c5df-c68c-4f52-b572-6367fa71fd77-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2510c5df-c68c-4f52-b572-6367fa71fd77" (UID: "2510c5df-c68c-4f52-b572-6367fa71fd77"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.075584 master-0 kubenswrapper[27819]: I0319 09:50:07.075188 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.078630 master-0 kubenswrapper[27819]: I0319 09:50:07.076558 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.078630 master-0 kubenswrapper[27819]: I0319 09:50:07.076821 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.078630 master-0 kubenswrapper[27819]: I0319 09:50:07.076855 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.078630 master-0 kubenswrapper[27819]: I0319 09:50:07.076900 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.099586 master-0 kubenswrapper[27819]: I0319 09:50:07.099365 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2510c5df-c68c-4f52-b572-6367fa71fd77-kube-api-access-tjv76" (OuterVolumeSpecName: "kube-api-access-tjv76") pod "2510c5df-c68c-4f52-b572-6367fa71fd77" (UID: "2510c5df-c68c-4f52-b572-6367fa71fd77"). InnerVolumeSpecName "kube-api-access-tjv76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:07.099586 master-0 kubenswrapper[27819]: I0319 09:50:07.099391 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2510c5df-c68c-4f52-b572-6367fa71fd77" (UID: "2510c5df-c68c-4f52-b572-6367fa71fd77"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.099586 master-0 kubenswrapper[27819]: I0319 09:50:07.099391 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.103579 master-0 kubenswrapper[27819]: I0319 09:50:07.102116 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-scripts" (OuterVolumeSpecName: "scripts") pod "2510c5df-c68c-4f52-b572-6367fa71fd77" (UID: "2510c5df-c68c-4f52-b572-6367fa71fd77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.175565 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-lib-cinder\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.175635 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-run\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.175696 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnqg6\" (UniqueName: \"kubernetes.io/projected/a65cfcee-1397-46cc-af85-67a07c3e325c-kube-api-access-nnqg6\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.175751 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-scripts\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.175784 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-dev\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.175811 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-iscsi\") pod \"a65cfcee-1397-46cc-af85-67a07c3e325c\" (UID: \"a65cfcee-1397-46cc-af85-67a07c3e325c\") "
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.176849 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177602 27819 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177623 27819 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-sys\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177638 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177650 27819 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177665 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjv76\" (UniqueName: \"kubernetes.io/projected/2510c5df-c68c-4f52-b572-6367fa71fd77-kube-api-access-tjv76\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177678 27819 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177689 27819 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177702 27819 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177714 27819 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2510c5df-c68c-4f52-b572-6367fa71fd77-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177725 27819 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177736 27819 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-lib-modules\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177748 27819 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-nvme\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177782 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-dev" (OuterVolumeSpecName: "dev") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177812 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.177837 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-run" (OuterVolumeSpecName: "run") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:07.185128 master-0 kubenswrapper[27819]: I0319 09:50:07.181031 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65cfcee-1397-46cc-af85-67a07c3e325c-kube-api-access-nnqg6" (OuterVolumeSpecName: "kube-api-access-nnqg6") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "kube-api-access-nnqg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:07.185963 master-0 kubenswrapper[27819]: I0319 09:50:07.185366 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-scripts" (OuterVolumeSpecName: "scripts") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.190519 master-0 kubenswrapper[27819]: I0319 09:50:07.190428 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2510c5df-c68c-4f52-b572-6367fa71fd77" (UID: "2510c5df-c68c-4f52-b572-6367fa71fd77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.236719 master-0 kubenswrapper[27819]: I0319 09:50:07.236656 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.255382 master-0 kubenswrapper[27819]: I0319 09:50:07.254376 27819 generic.go:334] "Generic (PLEG): container finished" podID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerID="31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456" exitCode=0
Mar 19 09:50:07.255382 master-0 kubenswrapper[27819]: I0319 09:50:07.254475 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"a65cfcee-1397-46cc-af85-67a07c3e325c","Type":"ContainerDied","Data":"31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456"}
Mar 19 09:50:07.255382 master-0 kubenswrapper[27819]: I0319 09:50:07.254530 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"a65cfcee-1397-46cc-af85-67a07c3e325c","Type":"ContainerDied","Data":"c30f77e8092274f53ac9f7b71061a9750b976ce8cb0a227613bdf020d51c373f"}
Mar 19 09:50:07.255382 master-0 kubenswrapper[27819]: I0319 09:50:07.254580 27819 scope.go:117] "RemoveContainer" containerID="47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c"
Mar 19 09:50:07.255382 master-0 kubenswrapper[27819]: I0319 09:50:07.255126 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.269014 master-0 kubenswrapper[27819]: I0319 09:50:07.268945 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"3b665618-cc45-40c7-88c5-563951c4ea1f","Type":"ContainerDied","Data":"f757957a18287dc36705c733975e1089e9ba07db73c123b1d1ff852e35a8ce43"}
Mar 19 09:50:07.269111 master-0 kubenswrapper[27819]: I0319 09:50:07.269059 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-volume-lvm-iscsi-0"
Mar 19 09:50:07.278968 master-0 kubenswrapper[27819]: I0319 09:50:07.278916 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnqg6\" (UniqueName: \"kubernetes.io/projected/a65cfcee-1397-46cc-af85-67a07c3e325c-kube-api-access-nnqg6\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.278968 master-0 kubenswrapper[27819]: I0319 09:50:07.278958 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.278968 master-0 kubenswrapper[27819]: I0319 09:50:07.278971 27819 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-dev\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.279228 master-0 kubenswrapper[27819]: I0319 09:50:07.278984 27819 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.279228 master-0 kubenswrapper[27819]: I0319 09:50:07.278996 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.279228 master-0 kubenswrapper[27819]: I0319 09:50:07.279007 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.280650 master-0 kubenswrapper[27819]: I0319 09:50:07.279017 27819 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a65cfcee-1397-46cc-af85-67a07c3e325c-run\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.283867 master-0 kubenswrapper[27819]: I0319 09:50:07.283789 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data" (OuterVolumeSpecName: "config-data") pod "2510c5df-c68c-4f52-b572-6367fa71fd77" (UID: "2510c5df-c68c-4f52-b572-6367fa71fd77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.290463 master-0 kubenswrapper[27819]: I0319 09:50:07.290404 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.321855 master-0 kubenswrapper[27819]: I0319 09:50:07.321290 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" path="/var/lib/kubelet/pods/5934b50c-8a57-4df0-83a5-a6cf7279d7f8/volumes"
Mar 19 09:50:07.332211 master-0 kubenswrapper[27819]: I0319 09:50:07.332121 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"2510c5df-c68c-4f52-b572-6367fa71fd77","Type":"ContainerDied","Data":"c26ddc5d38ba3babe875e3098e6de7a21ce6d7942ef247fbb3e0dd19341354b6"}
Mar 19 09:50:07.383135 master-0 kubenswrapper[27819]: I0319 09:50:07.382639 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2510c5df-c68c-4f52-b572-6367fa71fd77-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:07.390608 master-0 kubenswrapper[27819]: I0319 09:50:07.384663 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"]
Mar 19 09:50:07.390608 master-0 kubenswrapper[27819]: I0319 09:50:07.388736 27819 scope.go:117] "RemoveContainer" containerID="31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456"
Mar 19 09:50:07.421052 master-0 kubenswrapper[27819]: I0319 09:50:07.420656 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"]
Mar 19 09:50:07.476406 master-0 kubenswrapper[27819]: I0319 09:50:07.476287 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-scheduler-0"]
Mar 19 09:50:07.487341 master-0 kubenswrapper[27819]: I0319 09:50:07.487273 27819 scope.go:117] "RemoveContainer" containerID="47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c"
Mar 19 09:50:07.499655 master-0 kubenswrapper[27819]: E0319 09:50:07.488255 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c\": container with ID starting with 47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c not found: ID does not exist" containerID="47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c"
Mar 19 09:50:07.499655 master-0 kubenswrapper[27819]: I0319 09:50:07.488304 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c"} err="failed to get container status \"47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c\": rpc error: code = NotFound desc = could not find container \"47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c\": container with ID starting with 47f18a821459285b4ef3bc2a6a41260de6d1ef27266a3f589721df979e49603c not found: ID does not exist"
Mar 19 09:50:07.499655 master-0 kubenswrapper[27819]: I0319 09:50:07.488335 27819 scope.go:117] "RemoveContainer" containerID="31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456"
Mar 19 09:50:07.505478 master-0 kubenswrapper[27819]: E0319 09:50:07.504169 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456\": container with ID starting with 31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456 not found: ID does not exist" containerID="31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456"
Mar 19 09:50:07.505478 master-0 kubenswrapper[27819]: I0319 09:50:07.504211 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456"} err="failed to get container status \"31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456\": rpc error: code = NotFound desc = could not find container \"31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456\": container with ID starting with 31cedb739dcb4fdf7457f6c018d2561fd03893d38c31723bbddb418799537456 not found: ID does not exist"
Mar 19 09:50:07.505478 master-0 kubenswrapper[27819]: I0319 09:50:07.504237 27819 scope.go:117] "RemoveContainer" containerID="02a27e0081b779b16abdf2a07e87b88cc5763d470b13fc3784f80723f0430c9a"
Mar 19 09:50:07.507657 master-0 kubenswrapper[27819]: I0319 09:50:07.507609 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data" (OuterVolumeSpecName: "config-data") pod "a65cfcee-1397-46cc-af85-67a07c3e325c" (UID: "a65cfcee-1397-46cc-af85-67a07c3e325c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:07.536525 master-0 kubenswrapper[27819]: I0319 09:50:07.536269 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"]
Mar 19 09:50:07.537016 master-0 kubenswrapper[27819]: E0319 09:50:07.536986 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="probe"
Mar 19 09:50:07.537016 master-0 kubenswrapper[27819]: I0319 09:50:07.537012 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="probe"
Mar 19 09:50:07.537101 master-0 kubenswrapper[27819]: E0319 09:50:07.537024 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerName="dnsmasq-dns"
Mar 19 09:50:07.537101 master-0 kubenswrapper[27819]: I0319 09:50:07.537033 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerName="dnsmasq-dns"
Mar 19 09:50:07.537101 master-0 kubenswrapper[27819]: E0319 09:50:07.537054 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="cinder-volume"
Mar 19 09:50:07.537101 master-0 kubenswrapper[27819]: I0319 09:50:07.537061 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="cinder-volume"
Mar 19 09:50:07.537101 master-0 kubenswrapper[27819]: E0319 09:50:07.537074 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="probe"
Mar 19 09:50:07.537101 master-0 kubenswrapper[27819]: I0319 09:50:07.537083 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="probe"
Mar 19 09:50:07.537101 master-0 kubenswrapper[27819]: E0319 09:50:07.537102 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="cinder-backup"
Mar 19 09:50:07.537291 master-0 kubenswrapper[27819]: I0319 09:50:07.537108 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="cinder-backup"
Mar 19 09:50:07.537291 master-0 kubenswrapper[27819]: E0319 09:50:07.537120 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="probe"
Mar 19 09:50:07.537291 master-0 kubenswrapper[27819]: I0319 09:50:07.537126 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="probe"
Mar 19 09:50:07.537291 master-0 kubenswrapper[27819]: E0319 09:50:07.537146 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerName="init"
Mar 19 09:50:07.537291 master-0 kubenswrapper[27819]: I0319 09:50:07.537152 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerName="init"
Mar 19 09:50:07.537291 master-0 kubenswrapper[27819]: E0319 09:50:07.537175 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="cinder-scheduler"
Mar 19 09:50:07.537291 master-0 kubenswrapper[27819]: I0319 09:50:07.537182 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="cinder-scheduler"
Mar 19 09:50:07.537791 master-0 kubenswrapper[27819]: I0319 09:50:07.537434 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="probe"
Mar 19 09:50:07.537791 master-0 kubenswrapper[27819]: I0319 09:50:07.537451 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="probe"
Mar 19 09:50:07.537791 master-0 kubenswrapper[27819]: I0319 09:50:07.537469 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="probe"
Mar 19 09:50:07.537791 master-0 kubenswrapper[27819]: I0319 09:50:07.537479 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" containerName="cinder-backup"
Mar 19 09:50:07.537791 master-0 kubenswrapper[27819]: I0319 09:50:07.537503 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" containerName="cinder-scheduler"
Mar 19 09:50:07.537791 master-0 kubenswrapper[27819]: I0319 09:50:07.537509 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" containerName="cinder-volume"
Mar 19 09:50:07.537791 master-0 kubenswrapper[27819]: I0319 09:50:07.537516 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="5934b50c-8a57-4df0-83a5-a6cf7279d7f8" containerName="dnsmasq-dns"
Mar 19 09:50:07.538631 master-0 kubenswrapper[27819]: I0319 09:50:07.538612 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-volume-lvm-iscsi-0"
Mar 19 09:50:07.541591 master-0 kubenswrapper[27819]: I0319 09:50:07.540893 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-volume-lvm-iscsi-config-data"
Mar 19 09:50:07.547144 master-0 kubenswrapper[27819]: I0319 09:50:07.546992 27819 scope.go:117] "RemoveContainer" containerID="4bdf4d30e5c9feb5390d1af70e36c977732cf13d98296b3a877c1d8a88e60da2"
Mar 19 09:50:07.577564 master-0 kubenswrapper[27819]: I0319 09:50:07.571390 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-255d6-scheduler-0"]
Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.588662 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-nvme\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0"
Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589037 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-lib-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0"
Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589083 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-iscsi\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0"
Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589132
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-machine-id\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589165 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-run\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589186 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-lib-modules\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589226 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-dev\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589242 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-locks-brick\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " 
pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589264 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-config-data-custom\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589288 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-scripts\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589667 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-sys\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589796 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-combined-ca-bundle\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589928 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs8w7\" (UniqueName: 
\"kubernetes.io/projected/db5a3772-4afd-478b-85d3-dd454056f3b9-kube-api-access-bs8w7\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.589979 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-config-data\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.590043 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-locks-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.599563 master-0 kubenswrapper[27819]: I0319 09:50:07.590114 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65cfcee-1397-46cc-af85-67a07c3e325c-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:07.613112 master-0 kubenswrapper[27819]: I0319 09:50:07.613050 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"] Mar 19 09:50:07.619715 master-0 kubenswrapper[27819]: I0319 09:50:07.616401 27819 scope.go:117] "RemoveContainer" containerID="81226ef575ecc13d89e1c7566d0981749357755277809d2a9f7a25027e9b68a7" Mar 19 09:50:07.637212 master-0 kubenswrapper[27819]: I0319 09:50:07.636700 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-scheduler-0"] Mar 19 09:50:07.639834 master-0 kubenswrapper[27819]: I0319 09:50:07.639027 27819 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.642266 master-0 kubenswrapper[27819]: I0319 09:50:07.642068 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-scheduler-config-data" Mar 19 09:50:07.660055 master-0 kubenswrapper[27819]: I0319 09:50:07.659972 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-scheduler-0"] Mar 19 09:50:07.693677 master-0 kubenswrapper[27819]: I0319 09:50:07.693611 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:50:07.694363 master-0 kubenswrapper[27819]: I0319 09:50:07.694321 27819 scope.go:117] "RemoveContainer" containerID="dff42349c25d0f566f9088e42f49853ad4960376424175b12c0eb7abdbd9a7eb" Mar 19 09:50:07.695513 master-0 kubenswrapper[27819]: I0319 09:50:07.695467 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-dev\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.695572 master-0 kubenswrapper[27819]: I0319 09:50:07.695523 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-locks-brick\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.695612 master-0 kubenswrapper[27819]: I0319 09:50:07.695573 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-dev\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " 
pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.695612 master-0 kubenswrapper[27819]: I0319 09:50:07.695572 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-combined-ca-bundle\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.695671 master-0 kubenswrapper[27819]: I0319 09:50:07.695649 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-config-data-custom\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.695671 master-0 kubenswrapper[27819]: I0319 09:50:07.695660 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-locks-brick\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.695741 master-0 kubenswrapper[27819]: I0319 09:50:07.695686 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb7ww\" (UniqueName: \"kubernetes.io/projected/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-kube-api-access-vb7ww\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.695741 master-0 kubenswrapper[27819]: I0319 09:50:07.695719 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-scripts\") pod 
\"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.695828 master-0 kubenswrapper[27819]: I0319 09:50:07.695807 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-config-data\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.695908 master-0 kubenswrapper[27819]: I0319 09:50:07.695884 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-config-data-custom\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.695960 master-0 kubenswrapper[27819]: I0319 09:50:07.695915 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-sys\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696052 master-0 kubenswrapper[27819]: I0319 09:50:07.696032 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-combined-ca-bundle\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696098 master-0 kubenswrapper[27819]: I0319 09:50:07.696056 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs8w7\" (UniqueName: 
\"kubernetes.io/projected/db5a3772-4afd-478b-85d3-dd454056f3b9-kube-api-access-bs8w7\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696098 master-0 kubenswrapper[27819]: I0319 09:50:07.696080 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-config-data\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696157 master-0 kubenswrapper[27819]: I0319 09:50:07.696110 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-locks-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696157 master-0 kubenswrapper[27819]: I0319 09:50:07.696145 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-etc-machine-id\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.696221 master-0 kubenswrapper[27819]: I0319 09:50:07.696184 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-nvme\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696272 master-0 kubenswrapper[27819]: I0319 09:50:07.696252 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-scripts\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.696314 master-0 kubenswrapper[27819]: I0319 09:50:07.696276 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-lib-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696349 master-0 kubenswrapper[27819]: I0319 09:50:07.696333 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-iscsi\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696422 master-0 kubenswrapper[27819]: I0319 09:50:07.696405 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-machine-id\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696474 master-0 kubenswrapper[27819]: I0319 09:50:07.696460 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-run\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696506 master-0 kubenswrapper[27819]: I0319 09:50:07.696481 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-lib-modules\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.696635 master-0 kubenswrapper[27819]: I0319 09:50:07.696613 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-lib-modules\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.700844 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-sys\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.704907 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-machine-id\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.704919 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-nvme\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.704956 27819 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-lib-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.704990 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-var-locks-cinder\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.705001 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-run\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.705024 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/db5a3772-4afd-478b-85d3-dd454056f3b9-etc-iscsi\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.705429 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-scripts\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.707053 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-combined-ca-bundle\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.710118 master-0 kubenswrapper[27819]: I0319 09:50:07.708754 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:50:07.726059 master-0 kubenswrapper[27819]: I0319 09:50:07.726005 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:50:07.728123 master-0 kubenswrapper[27819]: I0319 09:50:07.728045 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.733902 master-0 kubenswrapper[27819]: I0319 09:50:07.733861 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-backup-config-data" Mar 19 09:50:07.738532 master-0 kubenswrapper[27819]: I0319 09:50:07.738475 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-config-data-custom\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.749266 master-0 kubenswrapper[27819]: I0319 09:50:07.748010 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-backup-0"] Mar 19 09:50:07.749266 master-0 kubenswrapper[27819]: I0319 09:50:07.748727 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs8w7\" (UniqueName: \"kubernetes.io/projected/db5a3772-4afd-478b-85d3-dd454056f3b9-kube-api-access-bs8w7\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.749266 master-0 
kubenswrapper[27819]: I0319 09:50:07.748831 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db5a3772-4afd-478b-85d3-dd454056f3b9-config-data\") pod \"cinder-255d6-volume-lvm-iscsi-0\" (UID: \"db5a3772-4afd-478b-85d3-dd454056f3b9\") " pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:07.800174 master-0 kubenswrapper[27819]: I0319 09:50:07.800106 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb7ww\" (UniqueName: \"kubernetes.io/projected/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-kube-api-access-vb7ww\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 09:50:07.800209 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-dev\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 09:50:07.800237 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-config-data\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 09:50:07.800256 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-iscsi\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 
09:50:07.800294 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-combined-ca-bundle\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 09:50:07.800317 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-locks-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 09:50:07.800339 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-config-data\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 09:50:07.800360 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-config-data-custom\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.800414 master-0 kubenswrapper[27819]: I0319 09:50:07.800390 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-run\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.800414 master-0 
kubenswrapper[27819]: I0319 09:50:07.800409 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-locks-brick\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800452 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-etc-machine-id\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800512 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-scripts\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800533 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-lib-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800595 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-machine-id\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 master-0 
kubenswrapper[27819]: I0319 09:50:07.800621 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-sys\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800642 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-scripts\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800709 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-nvme\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800739 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-config-data-custom\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800774 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7sc\" (UniqueName: \"kubernetes.io/projected/fc3c3fd4-1650-4012-9488-cba497b6776e-kube-api-access-sr7sc\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:07.801124 
master-0 kubenswrapper[27819]: I0319 09:50:07.800792 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-lib-modules\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.801124 master-0 kubenswrapper[27819]: I0319 09:50:07.800816 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-combined-ca-bundle\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.806345 master-0 kubenswrapper[27819]: I0319 09:50:07.806173 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-etc-machine-id\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.816777 master-0 kubenswrapper[27819]: I0319 09:50:07.816662 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-combined-ca-bundle\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.820450 master-0 kubenswrapper[27819]: I0319 09:50:07.820409 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-config-data-custom\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.826729 master-0
kubenswrapper[27819]: I0319 09:50:07.823083 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb7ww\" (UniqueName: \"kubernetes.io/projected/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-kube-api-access-vb7ww\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.827085 master-0 kubenswrapper[27819]: I0319 09:50:07.827010 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-scripts\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.827424 master-0 kubenswrapper[27819]: I0319 09:50:07.827388 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d035bd9e-47a7-4d0a-a754-b98bc99dd02b-config-data\") pod \"cinder-255d6-scheduler-0\" (UID: \"d035bd9e-47a7-4d0a-a754-b98bc99dd02b\") " pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:07.875823 master-0 kubenswrapper[27819]: I0319 09:50:07.875673 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-255d6-volume-lvm-iscsi-0"
Mar 19 09:50:07.878860 master-0 kubenswrapper[27819]: I0319 09:50:07.878815 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-255d6-api-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.905937 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-lib-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906039 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-machine-id\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906077 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-sys\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906107 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-scripts\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906185 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-nvme\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906227 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-config-data-custom\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906265 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7sc\" (UniqueName: \"kubernetes.io/projected/fc3c3fd4-1650-4012-9488-cba497b6776e-kube-api-access-sr7sc\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906292 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-lib-modules\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906376 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-dev\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906408 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-iscsi\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906456 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-combined-ca-bundle\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906480 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-locks-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906510 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-config-data\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906570 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-run\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906597 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-locks-brick\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906772 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-locks-brick\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906845 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-lib-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906884 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-machine-id\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.907654 master-0 kubenswrapper[27819]: I0319 09:50:07.906917 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-sys\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.910879 master-0 kubenswrapper[27819]: I0319 09:50:07.908060 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-nvme\") pod \"cinder-255d6-backup-0\" (UID: 
\"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.910879 master-0 kubenswrapper[27819]: I0319 09:50:07.909947 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-lib-modules\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.910879 master-0 kubenswrapper[27819]: I0319 09:50:07.910003 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-dev\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.910879 master-0 kubenswrapper[27819]: I0319 09:50:07.910033 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-etc-iscsi\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.910879 master-0 kubenswrapper[27819]: I0319 09:50:07.910687 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-run\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.910879 master-0 kubenswrapper[27819]: I0319 09:50:07.910783 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc3c3fd4-1650-4012-9488-cba497b6776e-var-locks-cinder\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.911129 master-0 kubenswrapper[27819]: I0319 
09:50:07.911088 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-scripts\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.916668 master-0 kubenswrapper[27819]: I0319 09:50:07.911774 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-config-data-custom\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.916668 master-0 kubenswrapper[27819]: I0319 09:50:07.914851 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-config-data\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.949596 master-0 kubenswrapper[27819]: I0319 09:50:07.940359 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7sc\" (UniqueName: \"kubernetes.io/projected/fc3c3fd4-1650-4012-9488-cba497b6776e-kube-api-access-sr7sc\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:07.949596 master-0 kubenswrapper[27819]: I0319 09:50:07.949102 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc3c3fd4-1650-4012-9488-cba497b6776e-combined-ca-bundle\") pod \"cinder-255d6-backup-0\" (UID: \"fc3c3fd4-1650-4012-9488-cba497b6776e\") " pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:08.012993 master-0 kubenswrapper[27819]: I0319 09:50:08.012635 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-255d6-scheduler-0"
Mar 19 09:50:08.062992 master-0 kubenswrapper[27819]: I0319 09:50:08.062468 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-backup-0"
Mar 19 09:50:08.525834 master-0 kubenswrapper[27819]: I0319 09:50:08.519946 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-volume-lvm-iscsi-0"]
Mar 19 09:50:08.670001 master-0 kubenswrapper[27819]: I0319 09:50:08.669897 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-scheduler-0"]
Mar 19 09:50:08.724120 master-0 kubenswrapper[27819]: W0319 09:50:08.724062 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc3c3fd4_1650_4012_9488_cba497b6776e.slice/crio-5df89184d736ac7e7994363c0570dde95580c51ee0c243b377176a5969cb44ef WatchSource:0}: Error finding container 5df89184d736ac7e7994363c0570dde95580c51ee0c243b377176a5969cb44ef: Status 404 returned error can't find the container with id 5df89184d736ac7e7994363c0570dde95580c51ee0c243b377176a5969cb44ef
Mar 19 09:50:08.726869 master-0 kubenswrapper[27819]: I0319 09:50:08.726493 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-backup-0"]
Mar 19 09:50:09.335458 master-0 kubenswrapper[27819]: I0319 09:50:09.335400 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2510c5df-c68c-4f52-b572-6367fa71fd77" path="/var/lib/kubelet/pods/2510c5df-c68c-4f52-b572-6367fa71fd77/volumes"
Mar 19 09:50:09.336121 master-0 kubenswrapper[27819]: I0319 09:50:09.336091 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b665618-cc45-40c7-88c5-563951c4ea1f" path="/var/lib/kubelet/pods/3b665618-cc45-40c7-88c5-563951c4ea1f/volumes"
Mar 19 09:50:09.337043 master-0 kubenswrapper[27819]: I0319 09:50:09.337021 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a65cfcee-1397-46cc-af85-67a07c3e325c" path="/var/lib/kubelet/pods/a65cfcee-1397-46cc-af85-67a07c3e325c/volumes"
Mar 19 09:50:09.406677 master-0 kubenswrapper[27819]: I0319 09:50:09.406627 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"db5a3772-4afd-478b-85d3-dd454056f3b9","Type":"ContainerStarted","Data":"37f15bed58477898871a1eecca1d2b2d15baef6fd04212bcc9c7c473fe158127"}
Mar 19 09:50:09.406888 master-0 kubenswrapper[27819]: I0319 09:50:09.406680 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"db5a3772-4afd-478b-85d3-dd454056f3b9","Type":"ContainerStarted","Data":"2ab251b7d52a125522d5b14cf2fd9310a1a64d3ad132cda116a0769c2aa29eb5"}
Mar 19 09:50:09.412582 master-0 kubenswrapper[27819]: I0319 09:50:09.411795 27819 generic.go:334] "Generic (PLEG): container finished" podID="23d034a2-6b7a-41f4-904d-f333f1ca8605" containerID="b5c48c7fbdde94ca974b5f96e8471c98a7ccfd64b116eff8a658403787596e23" exitCode=0
Mar 19 09:50:09.412582 master-0 kubenswrapper[27819]: I0319 09:50:09.411918 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-975s6" event={"ID":"23d034a2-6b7a-41f4-904d-f333f1ca8605","Type":"ContainerDied","Data":"b5c48c7fbdde94ca974b5f96e8471c98a7ccfd64b116eff8a658403787596e23"}
Mar 19 09:50:09.422597 master-0 kubenswrapper[27819]: I0319 09:50:09.422448 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"fc3c3fd4-1650-4012-9488-cba497b6776e","Type":"ContainerStarted","Data":"48a733ad4e71b35ecfc37b6249761c0b1fd2fcab223b4a02fabb5366e396eea7"}
Mar 19 09:50:09.422597 master-0 kubenswrapper[27819]: I0319 09:50:09.422495 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" 
event={"ID":"fc3c3fd4-1650-4012-9488-cba497b6776e","Type":"ContainerStarted","Data":"5df89184d736ac7e7994363c0570dde95580c51ee0c243b377176a5969cb44ef"}
Mar 19 09:50:09.424188 master-0 kubenswrapper[27819]: I0319 09:50:09.424152 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"d035bd9e-47a7-4d0a-a754-b98bc99dd02b","Type":"ContainerStarted","Data":"601169c6309257e1a1618273318e5b7b251742490fcec60029cc05a931464cbf"}
Mar 19 09:50:10.438839 master-0 kubenswrapper[27819]: I0319 09:50:10.437756 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"db5a3772-4afd-478b-85d3-dd454056f3b9","Type":"ContainerStarted","Data":"82c02e8982b8628e0d7cc8ff5d0954e1b1066a07adba9336efcffb6e834177c5"}
Mar 19 09:50:10.442643 master-0 kubenswrapper[27819]: I0319 09:50:10.442574 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"fc3c3fd4-1650-4012-9488-cba497b6776e","Type":"ContainerStarted","Data":"1807cbbe1a998fd873aae7b0fe97b49c881a609e67a45d59e7e89111d3dfa799"}
Mar 19 09:50:10.455061 master-0 kubenswrapper[27819]: I0319 09:50:10.454987 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"d035bd9e-47a7-4d0a-a754-b98bc99dd02b","Type":"ContainerStarted","Data":"a6a69c774ccf2c1f843576e6258f5d70fc7a501a1840be52fcf238c4d496d0a5"}
Mar 19 09:50:10.478562 master-0 kubenswrapper[27819]: I0319 09:50:10.476842 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podStartSLOduration=3.4768227400000002 podStartE2EDuration="3.47682274s" podCreationTimestamp="2026-03-19 09:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:10.475331552 +0000 UTC m=+995.396909264" watchObservedRunningTime="2026-03-19 
09:50:10.47682274 +0000 UTC m=+995.398400432"
Mar 19 09:50:10.533810 master-0 kubenswrapper[27819]: I0319 09:50:10.533714 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-backup-0" podStartSLOduration=3.5336881140000003 podStartE2EDuration="3.533688114s" podCreationTimestamp="2026-03-19 09:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:10.526382704 +0000 UTC m=+995.447960406" watchObservedRunningTime="2026-03-19 09:50:10.533688114 +0000 UTC m=+995.455265806"
Mar 19 09:50:11.066449 master-0 kubenswrapper[27819]: I0319 09:50:11.063894 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-975s6"
Mar 19 09:50:11.135569 master-0 kubenswrapper[27819]: I0319 09:50:11.134938 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-scripts\") pod \"23d034a2-6b7a-41f4-904d-f333f1ca8605\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") "
Mar 19 09:50:11.135569 master-0 kubenswrapper[27819]: I0319 09:50:11.135026 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23d034a2-6b7a-41f4-904d-f333f1ca8605-etc-podinfo\") pod \"23d034a2-6b7a-41f4-904d-f333f1ca8605\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") "
Mar 19 09:50:11.135569 master-0 kubenswrapper[27819]: I0319 09:50:11.135086 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data-merged\") pod \"23d034a2-6b7a-41f4-904d-f333f1ca8605\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") "
Mar 19 09:50:11.135569 master-0 kubenswrapper[27819]: I0319 09:50:11.135153 27819 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znjsc\" (UniqueName: \"kubernetes.io/projected/23d034a2-6b7a-41f4-904d-f333f1ca8605-kube-api-access-znjsc\") pod \"23d034a2-6b7a-41f4-904d-f333f1ca8605\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") "
Mar 19 09:50:11.135569 master-0 kubenswrapper[27819]: I0319 09:50:11.135240 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data\") pod \"23d034a2-6b7a-41f4-904d-f333f1ca8605\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") "
Mar 19 09:50:11.135569 master-0 kubenswrapper[27819]: I0319 09:50:11.135260 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-combined-ca-bundle\") pod \"23d034a2-6b7a-41f4-904d-f333f1ca8605\" (UID: \"23d034a2-6b7a-41f4-904d-f333f1ca8605\") "
Mar 19 09:50:11.137240 master-0 kubenswrapper[27819]: I0319 09:50:11.136376 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "23d034a2-6b7a-41f4-904d-f333f1ca8605" (UID: "23d034a2-6b7a-41f4-904d-f333f1ca8605"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 09:50:11.149003 master-0 kubenswrapper[27819]: I0319 09:50:11.148913 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/23d034a2-6b7a-41f4-904d-f333f1ca8605-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "23d034a2-6b7a-41f4-904d-f333f1ca8605" (UID: "23d034a2-6b7a-41f4-904d-f333f1ca8605"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 19 09:50:11.157827 master-0 kubenswrapper[27819]: I0319 09:50:11.156233 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d034a2-6b7a-41f4-904d-f333f1ca8605-kube-api-access-znjsc" (OuterVolumeSpecName: "kube-api-access-znjsc") pod "23d034a2-6b7a-41f4-904d-f333f1ca8605" (UID: "23d034a2-6b7a-41f4-904d-f333f1ca8605"). InnerVolumeSpecName "kube-api-access-znjsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:11.165531 master-0 kubenswrapper[27819]: I0319 09:50:11.163781 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-scripts" (OuterVolumeSpecName: "scripts") pod "23d034a2-6b7a-41f4-904d-f333f1ca8605" (UID: "23d034a2-6b7a-41f4-904d-f333f1ca8605"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:11.228744 master-0 kubenswrapper[27819]: I0319 09:50:11.228605 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data" (OuterVolumeSpecName: "config-data") pod "23d034a2-6b7a-41f4-904d-f333f1ca8605" (UID: "23d034a2-6b7a-41f4-904d-f333f1ca8605"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:11.255568 master-0 kubenswrapper[27819]: I0319 09:50:11.242488 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:11.255568 master-0 kubenswrapper[27819]: I0319 09:50:11.242529 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:11.255568 master-0 kubenswrapper[27819]: I0319 09:50:11.242538 27819 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/23d034a2-6b7a-41f4-904d-f333f1ca8605-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:11.255568 master-0 kubenswrapper[27819]: I0319 09:50:11.242561 27819 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/23d034a2-6b7a-41f4-904d-f333f1ca8605-config-data-merged\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:11.255568 master-0 kubenswrapper[27819]: I0319 09:50:11.242572 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znjsc\" (UniqueName: \"kubernetes.io/projected/23d034a2-6b7a-41f4-904d-f333f1ca8605-kube-api-access-znjsc\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:11.302568 master-0 kubenswrapper[27819]: I0319 09:50:11.298815 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23d034a2-6b7a-41f4-904d-f333f1ca8605" (UID: "23d034a2-6b7a-41f4-904d-f333f1ca8605"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:11.353026 master-0 kubenswrapper[27819]: I0319 09:50:11.352970 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d034a2-6b7a-41f4-904d-f333f1ca8605-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:11.480380 master-0 kubenswrapper[27819]: I0319 09:50:11.480235 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"d035bd9e-47a7-4d0a-a754-b98bc99dd02b","Type":"ContainerStarted","Data":"033f2c530326d5751e23dfc18c1da337febf12e7475392d4523daed7a9c96f4c"}
Mar 19 09:50:11.485628 master-0 kubenswrapper[27819]: I0319 09:50:11.485350 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-975s6"
Mar 19 09:50:11.485858 master-0 kubenswrapper[27819]: I0319 09:50:11.485682 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-975s6" event={"ID":"23d034a2-6b7a-41f4-904d-f333f1ca8605","Type":"ContainerDied","Data":"4922d0b4201601abe6162ea9d144612590b5be1fec90ccaf4724de7233de8452"}
Mar 19 09:50:11.485858 master-0 kubenswrapper[27819]: I0319 09:50:11.485747 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4922d0b4201601abe6162ea9d144612590b5be1fec90ccaf4724de7233de8452"
Mar 19 09:50:11.535881 master-0 kubenswrapper[27819]: I0319 09:50:11.533987 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-scheduler-0" podStartSLOduration=4.533962605 podStartE2EDuration="4.533962605s" podCreationTimestamp="2026-03-19 09:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:11.510169088 +0000 UTC m=+996.431746790" watchObservedRunningTime="2026-03-19 09:50:11.533962605 +0000 UTC m=+996.455540287" Mar 
19 09:50:11.997684 master-0 kubenswrapper[27819]: I0319 09:50:11.996308 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-mf842"]
Mar 19 09:50:12.003404 master-0 kubenswrapper[27819]: E0319 09:50:12.003324 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d034a2-6b7a-41f4-904d-f333f1ca8605" containerName="init"
Mar 19 09:50:12.003404 master-0 kubenswrapper[27819]: I0319 09:50:12.003370 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d034a2-6b7a-41f4-904d-f333f1ca8605" containerName="init"
Mar 19 09:50:12.003562 master-0 kubenswrapper[27819]: E0319 09:50:12.003456 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d034a2-6b7a-41f4-904d-f333f1ca8605" containerName="ironic-db-sync"
Mar 19 09:50:12.003562 master-0 kubenswrapper[27819]: I0319 09:50:12.003468 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d034a2-6b7a-41f4-904d-f333f1ca8605" containerName="ironic-db-sync"
Mar 19 09:50:12.003787 master-0 kubenswrapper[27819]: I0319 09:50:12.003748 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d034a2-6b7a-41f4-904d-f333f1ca8605" containerName="ironic-db-sync"
Mar 19 09:50:12.032662 master-0 kubenswrapper[27819]: I0319 09:50:12.029733 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-mf842"
Mar 19 09:50:12.060566 master-0 kubenswrapper[27819]: I0319 09:50:12.053340 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-mf842"]
Mar 19 09:50:12.183067 master-0 kubenswrapper[27819]: I0319 09:50:12.182997 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-operator-scripts\") pod \"ironic-inspector-db-create-mf842\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") " pod="openstack/ironic-inspector-db-create-mf842"
Mar 19 09:50:12.183422 master-0 kubenswrapper[27819]: I0319 09:50:12.183396 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n559n\" (UniqueName: \"kubernetes.io/projected/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-kube-api-access-n559n\") pod \"ironic-inspector-db-create-mf842\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") " pod="openstack/ironic-inspector-db-create-mf842"
Mar 19 09:50:12.227786 master-0 kubenswrapper[27819]: I0319 09:50:12.212951 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-fd91-account-create-update-2vq5k"]
Mar 19 09:50:12.227786 master-0 kubenswrapper[27819]: I0319 09:50:12.214401 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:12.227786 master-0 kubenswrapper[27819]: I0319 09:50:12.217210 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Mar 19 09:50:12.309939 master-0 kubenswrapper[27819]: I0319 09:50:12.288084 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n559n\" (UniqueName: \"kubernetes.io/projected/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-kube-api-access-n559n\") pod \"ironic-inspector-db-create-mf842\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") " pod="openstack/ironic-inspector-db-create-mf842" Mar 19 09:50:12.309939 master-0 kubenswrapper[27819]: I0319 09:50:12.288186 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-operator-scripts\") pod \"ironic-inspector-db-create-mf842\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") " pod="openstack/ironic-inspector-db-create-mf842" Mar 19 09:50:12.309939 master-0 kubenswrapper[27819]: I0319 09:50:12.308170 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-operator-scripts\") pod \"ironic-inspector-db-create-mf842\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") " pod="openstack/ironic-inspector-db-create-mf842" Mar 19 09:50:12.342573 master-0 kubenswrapper[27819]: I0319 09:50:12.334902 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-fd91-account-create-update-2vq5k"] Mar 19 09:50:12.390794 master-0 kubenswrapper[27819]: I0319 09:50:12.390739 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg4h8\" (UniqueName: 
\"kubernetes.io/projected/87b796c7-2fa1-405c-b2b4-5bccee70d82b-kube-api-access-dg4h8\") pod \"ironic-inspector-fd91-account-create-update-2vq5k\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:12.390794 master-0 kubenswrapper[27819]: I0319 09:50:12.390795 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b796c7-2fa1-405c-b2b4-5bccee70d82b-operator-scripts\") pod \"ironic-inspector-fd91-account-create-update-2vq5k\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:12.416572 master-0 kubenswrapper[27819]: I0319 09:50:12.399370 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n559n\" (UniqueName: \"kubernetes.io/projected/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-kube-api-access-n559n\") pod \"ironic-inspector-db-create-mf842\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") " pod="openstack/ironic-inspector-db-create-mf842" Mar 19 09:50:12.416572 master-0 kubenswrapper[27819]: I0319 09:50:12.411197 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-78c8dcbbcd-c8rst"] Mar 19 09:50:12.439073 master-0 kubenswrapper[27819]: I0319 09:50:12.428576 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:12.439073 master-0 kubenswrapper[27819]: I0319 09:50:12.432107 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Mar 19 09:50:12.439073 master-0 kubenswrapper[27819]: I0319 09:50:12.437634 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-78c8dcbbcd-c8rst"] Mar 19 09:50:12.466253 master-0 kubenswrapper[27819]: I0319 09:50:12.466219 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-756466c69-85qsr"] Mar 19 09:50:12.471132 master-0 kubenswrapper[27819]: I0319 09:50:12.469491 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:12.499553 master-0 kubenswrapper[27819]: I0319 09:50:12.476464 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756466c69-85qsr"] Mar 19 09:50:12.499553 master-0 kubenswrapper[27819]: I0319 09:50:12.491748 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg4h8\" (UniqueName: \"kubernetes.io/projected/87b796c7-2fa1-405c-b2b4-5bccee70d82b-kube-api-access-dg4h8\") pod \"ironic-inspector-fd91-account-create-update-2vq5k\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:12.499553 master-0 kubenswrapper[27819]: I0319 09:50:12.491788 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b796c7-2fa1-405c-b2b4-5bccee70d82b-operator-scripts\") pod \"ironic-inspector-fd91-account-create-update-2vq5k\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:12.499553 master-0 kubenswrapper[27819]: I0319 09:50:12.492527 
27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b796c7-2fa1-405c-b2b4-5bccee70d82b-operator-scripts\") pod \"ironic-inspector-fd91-account-create-update-2vq5k\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:13.145795 master-0 kubenswrapper[27819]: I0319 09:50:13.142282 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:13.150949 master-0 kubenswrapper[27819]: I0319 09:50:13.149122 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:13.159083 master-0 kubenswrapper[27819]: I0319 09:50:13.159037 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:13.160458 master-0 kubenswrapper[27819]: I0319 09:50:13.160413 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-config\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.160586 master-0 kubenswrapper[27819]: I0319 09:50:13.160476 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpg5b\" (UniqueName: \"kubernetes.io/projected/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-kube-api-access-kpg5b\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.161222 master-0 kubenswrapper[27819]: I0319 09:50:13.160762 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-combined-ca-bundle\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.161222 master-0 kubenswrapper[27819]: I0319 09:50:13.160814 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2sq\" (UniqueName: \"kubernetes.io/projected/7faee39a-b070-48c9-afed-6de955551889-kube-api-access-4r2sq\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.161222 master-0 kubenswrapper[27819]: I0319 09:50:13.160925 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-nb\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.161222 master-0 kubenswrapper[27819]: I0319 09:50:13.161133 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-swift-storage-0\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.161222 master-0 kubenswrapper[27819]: I0319 09:50:13.161158 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-svc\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.161597 master-0 
kubenswrapper[27819]: I0319 09:50:13.161247 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-sb\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.161597 master-0 kubenswrapper[27819]: I0319 09:50:13.161329 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-config\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.165403 master-0 kubenswrapper[27819]: I0319 09:50:13.165348 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-mf842" Mar 19 09:50:13.216575 master-0 kubenswrapper[27819]: I0319 09:50:13.210626 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg4h8\" (UniqueName: \"kubernetes.io/projected/87b796c7-2fa1-405c-b2b4-5bccee70d82b-kube-api-access-dg4h8\") pod \"ironic-inspector-fd91-account-create-update-2vq5k\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270214 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-swift-storage-0\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270287 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-svc\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270365 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-sb\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270471 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-config\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270618 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-config\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270663 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpg5b\" (UniqueName: \"kubernetes.io/projected/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-kube-api-access-kpg5b\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270756 27819 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-combined-ca-bundle\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270787 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r2sq\" (UniqueName: \"kubernetes.io/projected/7faee39a-b070-48c9-afed-6de955551889-kube-api-access-4r2sq\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.272988 master-0 kubenswrapper[27819]: I0319 09:50:13.270869 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-nb\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.308142 master-0 kubenswrapper[27819]: I0319 09:50:13.275367 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-swift-storage-0\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.308142 master-0 kubenswrapper[27819]: I0319 09:50:13.275657 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-svc\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.308142 master-0 kubenswrapper[27819]: 
I0319 09:50:13.279341 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-sb\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.308142 master-0 kubenswrapper[27819]: I0319 09:50:13.280284 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-config\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.308142 master-0 kubenswrapper[27819]: I0319 09:50:13.284069 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-nb\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.308142 master-0 kubenswrapper[27819]: I0319 09:50:13.290451 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-combined-ca-bundle\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.327663 master-0 kubenswrapper[27819]: I0319 09:50:13.324090 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpg5b\" (UniqueName: \"kubernetes.io/projected/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-kube-api-access-kpg5b\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.327663 master-0 
kubenswrapper[27819]: I0319 09:50:13.325636 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4878f70b-b6db-4fbf-969d-3bb08df3d2bf-config\") pod \"ironic-neutron-agent-78c8dcbbcd-c8rst\" (UID: \"4878f70b-b6db-4fbf-969d-3bb08df3d2bf\") " pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.388568 master-0 kubenswrapper[27819]: I0319 09:50:13.341600 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-657645bc55-77bfd"] Mar 19 09:50:13.388568 master-0 kubenswrapper[27819]: I0319 09:50:13.353379 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r2sq\" (UniqueName: \"kubernetes.io/projected/7faee39a-b070-48c9-afed-6de955551889-kube-api-access-4r2sq\") pod \"dnsmasq-dns-756466c69-85qsr\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.388568 master-0 kubenswrapper[27819]: I0319 09:50:13.354935 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.388568 master-0 kubenswrapper[27819]: I0319 09:50:13.366673 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:13.388568 master-0 kubenswrapper[27819]: I0319 09:50:13.370532 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Mar 19 09:50:13.388568 master-0 kubenswrapper[27819]: I0319 09:50:13.370785 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 19 09:50:13.393724 master-0 kubenswrapper[27819]: I0319 09:50:13.393683 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:13.404782 master-0 kubenswrapper[27819]: I0319 09:50:13.396603 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Mar 19 09:50:13.409565 master-0 kubenswrapper[27819]: I0319 09:50:13.397133 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 19 09:50:13.409565 master-0 kubenswrapper[27819]: I0319 09:50:13.397173 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 19 09:50:13.438614 master-0 kubenswrapper[27819]: I0319 09:50:13.438132 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-657645bc55-77bfd"] Mar 19 09:50:13.463722 master-0 kubenswrapper[27819]: I0319 09:50:13.463639 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:13.498022 master-0 kubenswrapper[27819]: I0319 09:50:13.497968 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-custom\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.498123 master-0 kubenswrapper[27819]: I0319 09:50:13.498046 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-combined-ca-bundle\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.498161 master-0 kubenswrapper[27819]: I0319 09:50:13.498134 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-etc-podinfo\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.498200 master-0 kubenswrapper[27819]: I0319 09:50:13.498161 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-scripts\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.498341 master-0 kubenswrapper[27819]: I0319 09:50:13.498315 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqkqb\" (UniqueName: \"kubernetes.io/projected/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-kube-api-access-mqkqb\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.519205 master-0 kubenswrapper[27819]: I0319 09:50:13.516956 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-logs\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.519205 master-0 kubenswrapper[27819]: I0319 09:50:13.517029 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.519205 master-0 kubenswrapper[27819]: I0319 09:50:13.517101 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-merged\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.621857 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-etc-podinfo\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.621917 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-scripts\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.621994 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqkqb\" (UniqueName: \"kubernetes.io/projected/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-kube-api-access-mqkqb\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.622158 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-logs\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.622194 27819 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.622247 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-merged\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.622284 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-custom\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.622310 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-combined-ca-bundle\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.633455 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-logs\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638088 master-0 kubenswrapper[27819]: I0319 09:50:13.633803 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: 
\"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-merged\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.638690 master-0 kubenswrapper[27819]: I0319 09:50:13.638429 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-combined-ca-bundle\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.640275 master-0 kubenswrapper[27819]: I0319 09:50:13.640246 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-scripts\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.655072 master-0 kubenswrapper[27819]: I0319 09:50:13.646912 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-etc-podinfo\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.655072 master-0 kubenswrapper[27819]: I0319 09:50:13.647537 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:13.686099 master-0 kubenswrapper[27819]: I0319 09:50:13.686062 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-custom\") pod 
\"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd"
Mar 19 09:50:13.703068 master-0 kubenswrapper[27819]: I0319 09:50:13.693311 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqkqb\" (UniqueName: \"kubernetes.io/projected/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-kube-api-access-mqkqb\") pod \"ironic-657645bc55-77bfd\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " pod="openstack/ironic-657645bc55-77bfd"
Mar 19 09:50:13.761331 master-0 kubenswrapper[27819]: I0319 09:50:13.749060 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-657645bc55-77bfd"
Mar 19 09:50:13.823606 master-0 kubenswrapper[27819]: I0319 09:50:13.821829 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-mf842"]
Mar 19 09:50:14.327592 master-0 kubenswrapper[27819]: I0319 09:50:14.327469 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756466c69-85qsr"]
Mar 19 09:50:14.557841 master-0 kubenswrapper[27819]: I0319 09:50:14.557768 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-fd91-account-create-update-2vq5k"]
Mar 19 09:50:14.628453 master-0 kubenswrapper[27819]: W0319 09:50:14.628171 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b796c7_2fa1_405c_b2b4_5bccee70d82b.slice/crio-618371f14c2ddb9b8557b631fe588cfd57626677fee50805cbd6f2bd8d2e62a3 WatchSource:0}: Error finding container 618371f14c2ddb9b8557b631fe588cfd57626677fee50805cbd6f2bd8d2e62a3: Status 404 returned error can't find the container with id 618371f14c2ddb9b8557b631fe588cfd57626677fee50805cbd6f2bd8d2e62a3
Mar 19 09:50:14.663337 master-0 kubenswrapper[27819]: I0319 09:50:14.663246 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-78c8dcbbcd-c8rst"]
Mar 19 09:50:14.674583 master-0 kubenswrapper[27819]: I0319 09:50:14.673905 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756466c69-85qsr" event={"ID":"7faee39a-b070-48c9-afed-6de955551889","Type":"ContainerStarted","Data":"4f82cbc7a006a5d4a25b5b2f831e6da45f4356f70821a501b44232ff62d25231"}
Mar 19 09:50:14.680621 master-0 kubenswrapper[27819]: I0319 09:50:14.680437 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mf842" event={"ID":"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83","Type":"ContainerStarted","Data":"0d072340e9c5d8c8a8ada8340eaf2e2cc57090a8105c9a970ee4899ea981242b"}
Mar 19 09:50:14.680621 master-0 kubenswrapper[27819]: I0319 09:50:14.680522 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mf842" event={"ID":"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83","Type":"ContainerStarted","Data":"8737bcc20c4096bc99265a8b671721d82cf89772901d4257e50bd2cb594839b3"}
Mar 19 09:50:14.765038 master-0 kubenswrapper[27819]: W0319 09:50:14.764955 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod790a7fb4_f06a_47cf_ae30_70dfe7197f5e.slice/crio-8ef062c13c254d9d73daced74a6e84372eaceae7c1a85f66dab201ff27a1ac7e WatchSource:0}: Error finding container 8ef062c13c254d9d73daced74a6e84372eaceae7c1a85f66dab201ff27a1ac7e: Status 404 returned error can't find the container with id 8ef062c13c254d9d73daced74a6e84372eaceae7c1a85f66dab201ff27a1ac7e
Mar 19 09:50:14.818225 master-0 kubenswrapper[27819]: I0319 09:50:14.818149 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-657645bc55-77bfd"]
Mar 19 09:50:15.708904 master-0 kubenswrapper[27819]: I0319 09:50:15.708766 27819 generic.go:334] "Generic (PLEG): container finished" podID="87b796c7-2fa1-405c-b2b4-5bccee70d82b" containerID="88335740d680312e38e2c34d8364c0a2784671f1b0560db4a354b24ee3313ab1" exitCode=0
Mar 19 09:50:15.708904 master-0 kubenswrapper[27819]: I0319 09:50:15.708852 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" event={"ID":"87b796c7-2fa1-405c-b2b4-5bccee70d82b","Type":"ContainerDied","Data":"88335740d680312e38e2c34d8364c0a2784671f1b0560db4a354b24ee3313ab1"}
Mar 19 09:50:15.708904 master-0 kubenswrapper[27819]: I0319 09:50:15.708883 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" event={"ID":"87b796c7-2fa1-405c-b2b4-5bccee70d82b","Type":"ContainerStarted","Data":"618371f14c2ddb9b8557b631fe588cfd57626677fee50805cbd6f2bd8d2e62a3"}
Mar 19 09:50:15.714058 master-0 kubenswrapper[27819]: I0319 09:50:15.713989 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerStarted","Data":"8ef062c13c254d9d73daced74a6e84372eaceae7c1a85f66dab201ff27a1ac7e"}
Mar 19 09:50:15.719733 master-0 kubenswrapper[27819]: I0319 09:50:15.719657 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mf842" event={"ID":"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83","Type":"ContainerDied","Data":"0d072340e9c5d8c8a8ada8340eaf2e2cc57090a8105c9a970ee4899ea981242b"}
Mar 19 09:50:15.722489 master-0 kubenswrapper[27819]: I0319 09:50:15.719516 27819 generic.go:334] "Generic (PLEG): container finished" podID="3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83" containerID="0d072340e9c5d8c8a8ada8340eaf2e2cc57090a8105c9a970ee4899ea981242b" exitCode=0
Mar 19 09:50:15.725565 master-0 kubenswrapper[27819]: I0319 09:50:15.725499 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" event={"ID":"4878f70b-b6db-4fbf-969d-3bb08df3d2bf","Type":"ContainerStarted","Data":"e18f481c09c7d7a30580a9b3eb8f34e936e1056181332280799e872a56d0642f"}
Mar 19 09:50:15.727860 master-0 kubenswrapper[27819]: I0319 09:50:15.727814 27819 generic.go:334] "Generic (PLEG): container finished" podID="7faee39a-b070-48c9-afed-6de955551889" containerID="71e547e34a16b8f7749b8445f6383128b46112b8d66ab6b461f4b68faaa61811" exitCode=0
Mar 19 09:50:15.728028 master-0 kubenswrapper[27819]: I0319 09:50:15.727868 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756466c69-85qsr" event={"ID":"7faee39a-b070-48c9-afed-6de955551889","Type":"ContainerDied","Data":"71e547e34a16b8f7749b8445f6383128b46112b8d66ab6b461f4b68faaa61811"}
Mar 19 09:50:16.266723 master-0 kubenswrapper[27819]: I0319 09:50:16.263088 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-mf842"
Mar 19 09:50:16.307536 master-0 kubenswrapper[27819]: I0319 09:50:16.306332 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"]
Mar 19 09:50:16.307536 master-0 kubenswrapper[27819]: E0319 09:50:16.307265 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83" containerName="mariadb-database-create"
Mar 19 09:50:16.307536 master-0 kubenswrapper[27819]: I0319 09:50:16.307284 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83" containerName="mariadb-database-create"
Mar 19 09:50:16.308016 master-0 kubenswrapper[27819]: I0319 09:50:16.307679 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83" containerName="mariadb-database-create"
Mar 19 09:50:16.311578 master-0 kubenswrapper[27819]: I0319 09:50:16.311037 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.316596 master-0 kubenswrapper[27819]: I0319 09:50:16.313908 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts"
Mar 19 09:50:16.316596 master-0 kubenswrapper[27819]: I0319 09:50:16.314698 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data"
Mar 19 09:50:16.403745 master-0 kubenswrapper[27819]: I0319 09:50:16.399719 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Mar 19 09:50:16.464882 master-0 kubenswrapper[27819]: I0319 09:50:16.463868 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n559n\" (UniqueName: \"kubernetes.io/projected/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-kube-api-access-n559n\") pod \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") "
Mar 19 09:50:16.464882 master-0 kubenswrapper[27819]: I0319 09:50:16.463997 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-operator-scripts\") pod \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\" (UID: \"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83\") "
Mar 19 09:50:16.464882 master-0 kubenswrapper[27819]: I0319 09:50:16.464494 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83" (UID: "3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:50:16.470578 master-0 kubenswrapper[27819]: I0319 09:50:16.465515 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.470578 master-0 kubenswrapper[27819]: I0319 09:50:16.465760 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca78928f-b0d4-4090-acba-66e98b7d312d-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.470578 master-0 kubenswrapper[27819]: I0319 09:50:16.465818 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f29ae85-0c13-4e76-8b24-1c4fa0297661\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7d4225f0-f7d6-4624-89a5-82c14ec2a0d9\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.470578 master-0 kubenswrapper[27819]: I0319 09:50:16.466038 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.470578 master-0 kubenswrapper[27819]: I0319 09:50:16.466104 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.473931 master-0 kubenswrapper[27819]: I0319 09:50:16.471136 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnbzk\" (UniqueName: \"kubernetes.io/projected/ca78928f-b0d4-4090-acba-66e98b7d312d-kube-api-access-mnbzk\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.473931 master-0 kubenswrapper[27819]: I0319 09:50:16.471351 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-scripts\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.473931 master-0 kubenswrapper[27819]: I0319 09:50:16.471415 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.473931 master-0 kubenswrapper[27819]: I0319 09:50:16.471846 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:16.477366 master-0 kubenswrapper[27819]: I0319 09:50:16.477240 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-kube-api-access-n559n" (OuterVolumeSpecName: "kube-api-access-n559n") pod "3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83" (UID: "3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83"). InnerVolumeSpecName "kube-api-access-n559n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:16.574774 master-0 kubenswrapper[27819]: I0319 09:50:16.574614 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca78928f-b0d4-4090-acba-66e98b7d312d-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.574774 master-0 kubenswrapper[27819]: I0319 09:50:16.574697 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f29ae85-0c13-4e76-8b24-1c4fa0297661\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7d4225f0-f7d6-4624-89a5-82c14ec2a0d9\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.575096 master-0 kubenswrapper[27819]: I0319 09:50:16.575034 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.575143 master-0 kubenswrapper[27819]: I0319 09:50:16.575104 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.575245 master-0 kubenswrapper[27819]: I0319 09:50:16.575218 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnbzk\" (UniqueName: \"kubernetes.io/projected/ca78928f-b0d4-4090-acba-66e98b7d312d-kube-api-access-mnbzk\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.575834 master-0 kubenswrapper[27819]: I0319 09:50:16.575337 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-scripts\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.575834 master-0 kubenswrapper[27819]: I0319 09:50:16.575374 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.576780 master-0 kubenswrapper[27819]: I0319 09:50:16.576137 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.576780 master-0 kubenswrapper[27819]: I0319 09:50:16.576657 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.579630 master-0 kubenswrapper[27819]: I0319 09:50:16.578686 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n559n\" (UniqueName: \"kubernetes.io/projected/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83-kube-api-access-n559n\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:16.579630 master-0 kubenswrapper[27819]: I0319 09:50:16.579010 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 09:50:16.579630 master-0 kubenswrapper[27819]: I0319 09:50:16.579045 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f29ae85-0c13-4e76-8b24-1c4fa0297661\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7d4225f0-f7d6-4624-89a5-82c14ec2a0d9\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/58ec998f89d7d030623e26d8a39bd73e9bd0e448d801c49e016e2ebd524f180c/globalmount\"" pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.584877 master-0 kubenswrapper[27819]: I0319 09:50:16.584252 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.585789 master-0 kubenswrapper[27819]: I0319 09:50:16.585747 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ca78928f-b0d4-4090-acba-66e98b7d312d-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.588407 master-0 kubenswrapper[27819]: I0319 09:50:16.587991 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.592176 master-0 kubenswrapper[27819]: I0319 09:50:16.592138 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-config-data\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.592487 master-0 kubenswrapper[27819]: I0319 09:50:16.592402 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca78928f-b0d4-4090-acba-66e98b7d312d-scripts\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.679393 master-0 kubenswrapper[27819]: I0319 09:50:16.679245 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnbzk\" (UniqueName: \"kubernetes.io/projected/ca78928f-b0d4-4090-acba-66e98b7d312d-kube-api-access-mnbzk\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0"
Mar 19 09:50:16.760058 master-0 kubenswrapper[27819]: I0319 09:50:16.759996 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756466c69-85qsr" event={"ID":"7faee39a-b070-48c9-afed-6de955551889","Type":"ContainerStarted","Data":"7c29502f529c87d3109a93e4a17aad9d4b4c04492048661f8bf362c26499978a"}
Mar 19 09:50:16.761792 master-0 kubenswrapper[27819]: I0319 09:50:16.761390 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-756466c69-85qsr"
Mar 19 09:50:16.793031 master-0 kubenswrapper[27819]: I0319 09:50:16.792934 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-mf842"
Mar 19 09:50:16.795420 master-0 kubenswrapper[27819]: I0319 09:50:16.792991 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-mf842" event={"ID":"3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83","Type":"ContainerDied","Data":"8737bcc20c4096bc99265a8b671721d82cf89772901d4257e50bd2cb594839b3"}
Mar 19 09:50:16.795523 master-0 kubenswrapper[27819]: I0319 09:50:16.795464 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8737bcc20c4096bc99265a8b671721d82cf89772901d4257e50bd2cb594839b3"
Mar 19 09:50:16.829551 master-0 kubenswrapper[27819]: I0319 09:50:16.829197 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-756466c69-85qsr" podStartSLOduration=4.829169333 podStartE2EDuration="4.829169333s" podCreationTimestamp="2026-03-19 09:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:16.792933874 +0000 UTC m=+1001.714511566" watchObservedRunningTime="2026-03-19 09:50:16.829169333 +0000 UTC m=+1001.750747025"
Mar 19 09:50:17.152566 master-0 kubenswrapper[27819]: I0319 09:50:17.147402 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:50:17.174191 master-0 kubenswrapper[27819]: I0319 09:50:17.174131 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:50:17.565386 master-0 kubenswrapper[27819]: I0319 09:50:17.565264 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-997449b9d-czw7t"]
Mar 19 09:50:17.573705 master-0 kubenswrapper[27819]: I0319 09:50:17.573640 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.599735 master-0 kubenswrapper[27819]: I0319 09:50:17.594604 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-997449b9d-czw7t"]
Mar 19 09:50:17.599735 master-0 kubenswrapper[27819]: I0319 09:50:17.596356 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-7c76f89d9c-mf42h"]
Mar 19 09:50:17.599735 master-0 kubenswrapper[27819]: I0319 09:50:17.598694 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.605647 master-0 kubenswrapper[27819]: I0319 09:50:17.602381 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Mar 19 09:50:17.605647 master-0 kubenswrapper[27819]: I0319 09:50:17.602642 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Mar 19 09:50:17.646312 master-0 kubenswrapper[27819]: I0319 09:50:17.641418 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7c76f89d9c-mf42h"]
Mar 19 09:50:17.762036 master-0 kubenswrapper[27819]: I0319 09:50:17.761975 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-scripts\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762036 master-0 kubenswrapper[27819]: I0319 09:50:17.762039 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762092 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/100aa4ed-2375-45c8-b22b-6b981a05d693-etc-podinfo\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762109 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-internal-tls-certs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762146 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data-custom\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762168 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-config-data\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762186 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data-merged\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762202 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/100aa4ed-2375-45c8-b22b-6b981a05d693-logs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762240 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-internal-tls-certs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762259 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-combined-ca-bundle\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762278 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-public-tls-certs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762298 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-logs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762343 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-public-tls-certs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762371 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-combined-ca-bundle\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762416 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9p74\" (UniqueName: \"kubernetes.io/projected/100aa4ed-2375-45c8-b22b-6b981a05d693-kube-api-access-h9p74\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762442 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65m2k\" (UniqueName: \"kubernetes.io/projected/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-kube-api-access-65m2k\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.762629 master-0 kubenswrapper[27819]: I0319 09:50:17.762468 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-scripts\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.867321 master-0 kubenswrapper[27819]: I0319 09:50:17.867203 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-internal-tls-certs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.867516 master-0 kubenswrapper[27819]: I0319 09:50:17.867452 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-combined-ca-bundle\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.867516 master-0 kubenswrapper[27819]: I0319 09:50:17.867485 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-public-tls-certs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.867613 master-0 kubenswrapper[27819]: I0319 09:50:17.867522 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-logs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.867613 master-0 kubenswrapper[27819]: I0319 09:50:17.867603 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-public-tls-certs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.868885 master-0 kubenswrapper[27819]: I0319 09:50:17.868828 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-combined-ca-bundle\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.869045 master-0 kubenswrapper[27819]: I0319 09:50:17.869014 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9p74\" (UniqueName: \"kubernetes.io/projected/100aa4ed-2375-45c8-b22b-6b981a05d693-kube-api-access-h9p74\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.869118 master-0 kubenswrapper[27819]: I0319 09:50:17.869088 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65m2k\" (UniqueName: \"kubernetes.io/projected/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-kube-api-access-65m2k\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.869170 master-0 kubenswrapper[27819]: I0319 09:50:17.869154 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-scripts\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.869767 master-0 kubenswrapper[27819]: I0319 09:50:17.869701 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-scripts\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.869864 master-0 kubenswrapper[27819]: I0319 09:50:17.869824 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.870237 master-0 kubenswrapper[27819]: I0319 09:50:17.870083 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/100aa4ed-2375-45c8-b22b-6b981a05d693-etc-podinfo\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.870237 master-0 kubenswrapper[27819]: I0319 09:50:17.870124 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-internal-tls-certs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.870365 master-0 kubenswrapper[27819]: I0319 09:50:17.870248 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data-custom\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.870365 master-0 kubenswrapper[27819]: I0319 09:50:17.870347 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-config-data\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.870457 master-0 kubenswrapper[27819]: I0319 09:50:17.870419 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data-merged\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.870498 master-0 kubenswrapper[27819]: I0319 09:50:17.870483 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/100aa4ed-2375-45c8-b22b-6b981a05d693-logs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.873791 master-0 kubenswrapper[27819]: I0319 09:50:17.871690 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-public-tls-certs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.873791 master-0 kubenswrapper[27819]: I0319 09:50:17.872606 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/100aa4ed-2375-45c8-b22b-6b981a05d693-logs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.877576 master-0 kubenswrapper[27819]: I0319 09:50:17.877177 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-logs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:17.877576 master-0 kubenswrapper[27819]: I0319 09:50:17.877420 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-internal-tls-certs\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.878372 master-0 kubenswrapper[27819]: I0319 09:50:17.878329 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/100aa4ed-2375-45c8-b22b-6b981a05d693-etc-podinfo\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.878826 master-0 kubenswrapper[27819]: I0319 09:50:17.878795 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data-merged\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.881317 master-0 kubenswrapper[27819]: I0319 09:50:17.881256 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data-custom\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:17.881446 master-0 kubenswrapper[27819]: I0319 09:50:17.881388 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-internal-tls-certs\") pod \"placement-997449b9d-czw7t\" (UID:
\"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:17.885141 master-0 kubenswrapper[27819]: I0319 09:50:17.882867 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-combined-ca-bundle\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h" Mar 19 09:50:17.885141 master-0 kubenswrapper[27819]: I0319 09:50:17.883562 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-config-data\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h" Mar 19 09:50:17.889138 master-0 kubenswrapper[27819]: I0319 09:50:17.887622 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-combined-ca-bundle\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:17.889138 master-0 kubenswrapper[27819]: I0319 09:50:17.887700 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/100aa4ed-2375-45c8-b22b-6b981a05d693-scripts\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h" Mar 19 09:50:17.889138 master-0 kubenswrapper[27819]: I0319 09:50:17.888055 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-public-tls-certs\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " 
pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:17.893000 master-0 kubenswrapper[27819]: I0319 09:50:17.891047 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9p74\" (UniqueName: \"kubernetes.io/projected/100aa4ed-2375-45c8-b22b-6b981a05d693-kube-api-access-h9p74\") pod \"ironic-7c76f89d9c-mf42h\" (UID: \"100aa4ed-2375-45c8-b22b-6b981a05d693\") " pod="openstack/ironic-7c76f89d9c-mf42h" Mar 19 09:50:17.893000 master-0 kubenswrapper[27819]: I0319 09:50:17.891695 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-config-data\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:17.895004 master-0 kubenswrapper[27819]: I0319 09:50:17.894964 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-scripts\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:17.903729 master-0 kubenswrapper[27819]: I0319 09:50:17.896244 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65m2k\" (UniqueName: \"kubernetes.io/projected/7a840c66-9d26-4751-86e3-8c6bfcbd9d4c-kube-api-access-65m2k\") pod \"placement-997449b9d-czw7t\" (UID: \"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c\") " pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:17.966755 master-0 kubenswrapper[27819]: I0319 09:50:17.966575 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:17.979643 master-0 kubenswrapper[27819]: I0319 09:50:17.979107 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-7c76f89d9c-mf42h" Mar 19 09:50:18.036743 master-0 kubenswrapper[27819]: I0319 09:50:18.036685 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f29ae85-0c13-4e76-8b24-1c4fa0297661\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7d4225f0-f7d6-4624-89a5-82c14ec2a0d9\") pod \"ironic-conductor-0\" (UID: \"ca78928f-b0d4-4090-acba-66e98b7d312d\") " pod="openstack/ironic-conductor-0" Mar 19 09:50:18.128233 master-0 kubenswrapper[27819]: I0319 09:50:18.128181 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:50:18.186429 master-0 kubenswrapper[27819]: I0319 09:50:18.171842 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Mar 19 09:50:18.259742 master-0 kubenswrapper[27819]: I0319 09:50:18.259639 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:50:18.317819 master-0 kubenswrapper[27819]: I0319 09:50:18.317214 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-backup-0" Mar 19 09:50:18.765611 master-0 kubenswrapper[27819]: I0319 09:50:18.765563 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:18.878138 master-0 kubenswrapper[27819]: I0319 09:50:18.840673 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" Mar 19 09:50:18.878138 master-0 kubenswrapper[27819]: I0319 09:50:18.840618 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-fd91-account-create-update-2vq5k" event={"ID":"87b796c7-2fa1-405c-b2b4-5bccee70d82b","Type":"ContainerDied","Data":"618371f14c2ddb9b8557b631fe588cfd57626677fee50805cbd6f2bd8d2e62a3"} Mar 19 09:50:18.878138 master-0 kubenswrapper[27819]: I0319 09:50:18.840920 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="618371f14c2ddb9b8557b631fe588cfd57626677fee50805cbd6f2bd8d2e62a3" Mar 19 09:50:18.915020 master-0 kubenswrapper[27819]: I0319 09:50:18.905302 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg4h8\" (UniqueName: \"kubernetes.io/projected/87b796c7-2fa1-405c-b2b4-5bccee70d82b-kube-api-access-dg4h8\") pod \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " Mar 19 09:50:18.915020 master-0 kubenswrapper[27819]: I0319 09:50:18.905460 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b796c7-2fa1-405c-b2b4-5bccee70d82b-operator-scripts\") pod \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\" (UID: \"87b796c7-2fa1-405c-b2b4-5bccee70d82b\") " Mar 19 09:50:18.915020 master-0 kubenswrapper[27819]: I0319 09:50:18.906775 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b796c7-2fa1-405c-b2b4-5bccee70d82b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87b796c7-2fa1-405c-b2b4-5bccee70d82b" (UID: "87b796c7-2fa1-405c-b2b4-5bccee70d82b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:18.924223 master-0 kubenswrapper[27819]: I0319 09:50:18.924127 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b796c7-2fa1-405c-b2b4-5bccee70d82b-kube-api-access-dg4h8" (OuterVolumeSpecName: "kube-api-access-dg4h8") pod "87b796c7-2fa1-405c-b2b4-5bccee70d82b" (UID: "87b796c7-2fa1-405c-b2b4-5bccee70d82b"). InnerVolumeSpecName "kube-api-access-dg4h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:19.009579 master-0 kubenswrapper[27819]: I0319 09:50:19.008841 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg4h8\" (UniqueName: \"kubernetes.io/projected/87b796c7-2fa1-405c-b2b4-5bccee70d82b-kube-api-access-dg4h8\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:19.009579 master-0 kubenswrapper[27819]: I0319 09:50:19.008884 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b796c7-2fa1-405c-b2b4-5bccee70d82b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:19.571951 master-0 kubenswrapper[27819]: I0319 09:50:19.571875 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-7c76f89d9c-mf42h"] Mar 19 09:50:19.616881 master-0 kubenswrapper[27819]: I0319 09:50:19.615770 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-997449b9d-czw7t"] Mar 19 09:50:19.640625 master-0 kubenswrapper[27819]: I0319 09:50:19.640416 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 19 09:50:19.863566 master-0 kubenswrapper[27819]: I0319 09:50:19.861728 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" event={"ID":"4878f70b-b6db-4fbf-969d-3bb08df3d2bf","Type":"ContainerStarted","Data":"f76fc9059d476dee793d11571d1e9117cfe7c030b0310e7c1ce852a94eb12612"} Mar 19 
09:50:19.863566 master-0 kubenswrapper[27819]: I0319 09:50:19.862221 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:19.868572 master-0 kubenswrapper[27819]: I0319 09:50:19.866651 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerStarted","Data":"612ac033f2dba04f31339a7c511bbd27f8fbcf146bd5c632c701676d87635e58"} Mar 19 09:50:19.872575 master-0 kubenswrapper[27819]: I0319 09:50:19.871791 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-997449b9d-czw7t" event={"ID":"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c","Type":"ContainerStarted","Data":"48de6e5d6665ac30ba9f550073aa55495bfeb4eac1434c493057e6830ec47ac7"} Mar 19 09:50:19.888999 master-0 kubenswrapper[27819]: I0319 09:50:19.888884 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7c76f89d9c-mf42h" event={"ID":"100aa4ed-2375-45c8-b22b-6b981a05d693","Type":"ContainerStarted","Data":"a2f388f4abf99b5e2ab604f3b4c3a205c936cbd34ca17bf066a913d125b27a75"} Mar 19 09:50:19.888999 master-0 kubenswrapper[27819]: I0319 09:50:19.888948 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7c76f89d9c-mf42h" event={"ID":"100aa4ed-2375-45c8-b22b-6b981a05d693","Type":"ContainerStarted","Data":"ef30ade699854ec96ae09b26f79ee5d6392489001944e98c486a51397fbfba61"} Mar 19 09:50:19.891591 master-0 kubenswrapper[27819]: I0319 09:50:19.891484 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" podStartSLOduration=3.71069734 podStartE2EDuration="7.891465999s" podCreationTimestamp="2026-03-19 09:50:12 +0000 UTC" firstStartedPulling="2026-03-19 09:50:14.612901532 +0000 UTC m=+999.534479224" lastFinishedPulling="2026-03-19 09:50:18.793670201 +0000 UTC m=+1003.715247883" observedRunningTime="2026-03-19 
09:50:19.881148881 +0000 UTC m=+1004.802726573" watchObservedRunningTime="2026-03-19 09:50:19.891465999 +0000 UTC m=+1004.813043691" Mar 19 09:50:19.919661 master-0 kubenswrapper[27819]: I0319 09:50:19.919250 27819 generic.go:334] "Generic (PLEG): container finished" podID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerID="883dc9a3047bbbb8fc86232bb48c5713d73de92f0179b594fa43e0609fc897a3" exitCode=0 Mar 19 09:50:19.919661 master-0 kubenswrapper[27819]: I0319 09:50:19.919315 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerDied","Data":"883dc9a3047bbbb8fc86232bb48c5713d73de92f0179b594fa43e0609fc897a3"} Mar 19 09:50:20.936358 master-0 kubenswrapper[27819]: I0319 09:50:20.935256 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerStarted","Data":"977bb037f35286a4a57ef586b8fb055fa1a5ffdf832026eb4b41d40c36f536cb"} Mar 19 09:50:20.944563 master-0 kubenswrapper[27819]: I0319 09:50:20.942252 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-997449b9d-czw7t" event={"ID":"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c","Type":"ContainerStarted","Data":"aabfc915d82ee0a08ebb9c462d1a18e3a96ca7597670453a3187dd12dc60eb5b"} Mar 19 09:50:20.944563 master-0 kubenswrapper[27819]: I0319 09:50:20.942313 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-997449b9d-czw7t" event={"ID":"7a840c66-9d26-4751-86e3-8c6bfcbd9d4c","Type":"ContainerStarted","Data":"2b73370dfb6a7226dbea0ed3ade3a401161a46657a37256fb14d09093eb79ff7"} Mar 19 09:50:20.950719 master-0 kubenswrapper[27819]: I0319 09:50:20.945720 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:20.950719 master-0 kubenswrapper[27819]: I0319 09:50:20.945791 27819 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/placement-997449b9d-czw7t" Mar 19 09:50:21.318215 master-0 kubenswrapper[27819]: I0319 09:50:21.318161 27819 generic.go:334] "Generic (PLEG): container finished" podID="100aa4ed-2375-45c8-b22b-6b981a05d693" containerID="a2f388f4abf99b5e2ab604f3b4c3a205c936cbd34ca17bf066a913d125b27a75" exitCode=0 Mar 19 09:50:21.340342 master-0 kubenswrapper[27819]: I0319 09:50:21.333844 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7c76f89d9c-mf42h" event={"ID":"100aa4ed-2375-45c8-b22b-6b981a05d693","Type":"ContainerDied","Data":"a2f388f4abf99b5e2ab604f3b4c3a205c936cbd34ca17bf066a913d125b27a75"} Mar 19 09:50:21.340342 master-0 kubenswrapper[27819]: I0319 09:50:21.333905 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerStarted","Data":"847f9c2d36c9509fbfa9d786c7b4f21691e3403331086ef74913656c5851d22d"} Mar 19 09:50:21.340342 master-0 kubenswrapper[27819]: I0319 09:50:21.333919 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerStarted","Data":"2245b37759f9e7ceaba465680a752c40c69e93067bdf2100afa3e5a4c5ac6abe"} Mar 19 09:50:21.340342 master-0 kubenswrapper[27819]: I0319 09:50:21.334070 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:21.708909 master-0 kubenswrapper[27819]: I0319 09:50:21.708825 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-657645bc55-77bfd" podStartSLOduration=4.723181293 podStartE2EDuration="8.708799576s" podCreationTimestamp="2026-03-19 09:50:13 +0000 UTC" firstStartedPulling="2026-03-19 09:50:14.789032965 +0000 UTC m=+999.710610657" lastFinishedPulling="2026-03-19 09:50:18.774651248 +0000 UTC m=+1003.696228940" 
observedRunningTime="2026-03-19 09:50:21.703749295 +0000 UTC m=+1006.625326997" watchObservedRunningTime="2026-03-19 09:50:21.708799576 +0000 UTC m=+1006.630377278" Mar 19 09:50:22.029533 master-0 kubenswrapper[27819]: I0319 09:50:22.027809 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-997449b9d-czw7t" podStartSLOduration=5.027779549 podStartE2EDuration="5.027779549s" podCreationTimestamp="2026-03-19 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:21.981768716 +0000 UTC m=+1006.903346418" watchObservedRunningTime="2026-03-19 09:50:22.027779549 +0000 UTC m=+1006.949357241" Mar 19 09:50:22.352585 master-0 kubenswrapper[27819]: I0319 09:50:22.351890 27819 generic.go:334] "Generic (PLEG): container finished" podID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerID="847f9c2d36c9509fbfa9d786c7b4f21691e3403331086ef74913656c5851d22d" exitCode=1 Mar 19 09:50:22.352585 master-0 kubenswrapper[27819]: I0319 09:50:22.351985 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerDied","Data":"847f9c2d36c9509fbfa9d786c7b4f21691e3403331086ef74913656c5851d22d"} Mar 19 09:50:22.354725 master-0 kubenswrapper[27819]: I0319 09:50:22.354684 27819 scope.go:117] "RemoveContainer" containerID="847f9c2d36c9509fbfa9d786c7b4f21691e3403331086ef74913656c5851d22d" Mar 19 09:50:22.356720 master-0 kubenswrapper[27819]: I0319 09:50:22.355771 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-7c76f89d9c-mf42h" event={"ID":"100aa4ed-2375-45c8-b22b-6b981a05d693","Type":"ContainerStarted","Data":"a385a73e8d3913359f9045bc55e59af5150e5818b11029e32532a416a2c81570"} Mar 19 09:50:22.356720 master-0 kubenswrapper[27819]: I0319 09:50:22.355809 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ironic-7c76f89d9c-mf42h" event={"ID":"100aa4ed-2375-45c8-b22b-6b981a05d693","Type":"ContainerStarted","Data":"df19e6d6a7bfeb06251ecbfb78aeda5100141a3de8b8bf9ca4f7b90315c8d505"} Mar 19 09:50:22.357915 master-0 kubenswrapper[27819]: I0319 09:50:22.357816 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-7c76f89d9c-mf42h" Mar 19 09:50:22.444507 master-0 kubenswrapper[27819]: I0319 09:50:22.444420 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-7c76f89d9c-mf42h" podStartSLOduration=5.44439809 podStartE2EDuration="5.44439809s" podCreationTimestamp="2026-03-19 09:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:22.414854306 +0000 UTC m=+1007.336432008" watchObservedRunningTime="2026-03-19 09:50:22.44439809 +0000 UTC m=+1007.365975782" Mar 19 09:50:23.383968 master-0 kubenswrapper[27819]: I0319 09:50:23.383833 27819 generic.go:334] "Generic (PLEG): container finished" podID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerID="05fff58e1a279d74683b5f373a4fc3cc00d85b1adfb3837d8174ac428245dcab" exitCode=1 Mar 19 09:50:23.385590 master-0 kubenswrapper[27819]: I0319 09:50:23.385447 27819 scope.go:117] "RemoveContainer" containerID="05fff58e1a279d74683b5f373a4fc3cc00d85b1adfb3837d8174ac428245dcab" Mar 19 09:50:23.385806 master-0 kubenswrapper[27819]: E0319 09:50:23.385740 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-657645bc55-77bfd_openstack(790a7fb4-f06a-47cf-ae30-70dfe7197f5e)\"" pod="openstack/ironic-657645bc55-77bfd" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" Mar 19 09:50:23.386136 master-0 kubenswrapper[27819]: I0319 09:50:23.386024 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerDied","Data":"05fff58e1a279d74683b5f373a4fc3cc00d85b1adfb3837d8174ac428245dcab"} Mar 19 09:50:23.386136 master-0 kubenswrapper[27819]: I0319 09:50:23.386078 27819 scope.go:117] "RemoveContainer" containerID="847f9c2d36c9509fbfa9d786c7b4f21691e3403331086ef74913656c5851d22d" Mar 19 09:50:23.400040 master-0 kubenswrapper[27819]: I0319 09:50:23.398317 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:50:23.421831 master-0 kubenswrapper[27819]: I0319 09:50:23.420314 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:23.588521 master-0 kubenswrapper[27819]: I0319 09:50:23.584016 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b94d96d9-jqhcm"] Mar 19 09:50:23.588521 master-0 kubenswrapper[27819]: I0319 09:50:23.584324 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerName="dnsmasq-dns" containerID="cri-o://f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc" gracePeriod=10 Mar 19 09:50:23.751715 master-0 kubenswrapper[27819]: I0319 09:50:23.751646 27819 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:23.751715 master-0 kubenswrapper[27819]: I0319 09:50:23.751716 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:23.910333 master-0 kubenswrapper[27819]: I0319 09:50:23.896912 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:50:23.983027 master-0 kubenswrapper[27819]: I0319 09:50:23.982989 27819 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6799b89bd8-q5hf4" Mar 19 09:50:24.369061 master-0 kubenswrapper[27819]: I0319 09:50:24.367615 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:50:24.418570 master-0 kubenswrapper[27819]: I0319 09:50:24.416001 27819 generic.go:334] "Generic (PLEG): container finished" podID="ca78928f-b0d4-4090-acba-66e98b7d312d" containerID="977bb037f35286a4a57ef586b8fb055fa1a5ffdf832026eb4b41d40c36f536cb" exitCode=0 Mar 19 09:50:24.418570 master-0 kubenswrapper[27819]: I0319 09:50:24.416127 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerDied","Data":"977bb037f35286a4a57ef586b8fb055fa1a5ffdf832026eb4b41d40c36f536cb"} Mar 19 09:50:24.422959 master-0 kubenswrapper[27819]: I0319 09:50:24.422884 27819 generic.go:334] "Generic (PLEG): container finished" podID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerID="f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc" exitCode=0 Mar 19 09:50:24.423046 master-0 kubenswrapper[27819]: I0319 09:50:24.423019 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" event={"ID":"b67d2371-be56-42c1-9cc1-9323ed72cf7e","Type":"ContainerDied","Data":"f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc"} Mar 19 09:50:24.423089 master-0 kubenswrapper[27819]: I0319 09:50:24.423058 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" event={"ID":"b67d2371-be56-42c1-9cc1-9323ed72cf7e","Type":"ContainerDied","Data":"e58cb939cc3bd2b0736cdb3c28ef9773a2a8d2d4eee610c423166140544ed0a7"} Mar 19 09:50:24.423122 master-0 kubenswrapper[27819]: I0319 09:50:24.423106 27819 scope.go:117] "RemoveContainer" 
containerID="f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc" Mar 19 09:50:24.423316 master-0 kubenswrapper[27819]: I0319 09:50:24.423285 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" Mar 19 09:50:24.429710 master-0 kubenswrapper[27819]: I0319 09:50:24.429668 27819 scope.go:117] "RemoveContainer" containerID="05fff58e1a279d74683b5f373a4fc3cc00d85b1adfb3837d8174ac428245dcab" Mar 19 09:50:24.430073 master-0 kubenswrapper[27819]: E0319 09:50:24.430016 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-657645bc55-77bfd_openstack(790a7fb4-f06a-47cf-ae30-70dfe7197f5e)\"" pod="openstack/ironic-657645bc55-77bfd" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" Mar 19 09:50:24.538313 master-0 kubenswrapper[27819]: I0319 09:50:24.538229 27819 scope.go:117] "RemoveContainer" containerID="f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28" Mar 19 09:50:24.560469 master-0 kubenswrapper[27819]: I0319 09:50:24.560386 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr6xz\" (UniqueName: \"kubernetes.io/projected/b67d2371-be56-42c1-9cc1-9323ed72cf7e-kube-api-access-xr6xz\") pod \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " Mar 19 09:50:24.560775 master-0 kubenswrapper[27819]: I0319 09:50:24.560632 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-nb\") pod \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") " Mar 19 09:50:24.560775 master-0 kubenswrapper[27819]: I0319 09:50:24.560737 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-config\") pod \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") "
Mar 19 09:50:24.560873 master-0 kubenswrapper[27819]: I0319 09:50:24.560803 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-svc\") pod \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") "
Mar 19 09:50:24.560873 master-0 kubenswrapper[27819]: I0319 09:50:24.560835 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-sb\") pod \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") "
Mar 19 09:50:24.560968 master-0 kubenswrapper[27819]: I0319 09:50:24.560884 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-swift-storage-0\") pod \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\" (UID: \"b67d2371-be56-42c1-9cc1-9323ed72cf7e\") "
Mar 19 09:50:24.573335 master-0 kubenswrapper[27819]: I0319 09:50:24.573258 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67d2371-be56-42c1-9cc1-9323ed72cf7e-kube-api-access-xr6xz" (OuterVolumeSpecName: "kube-api-access-xr6xz") pod "b67d2371-be56-42c1-9cc1-9323ed72cf7e" (UID: "b67d2371-be56-42c1-9cc1-9323ed72cf7e"). InnerVolumeSpecName "kube-api-access-xr6xz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:24.625119 master-0 kubenswrapper[27819]: I0319 09:50:24.625023 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b67d2371-be56-42c1-9cc1-9323ed72cf7e" (UID: "b67d2371-be56-42c1-9cc1-9323ed72cf7e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:50:24.641752 master-0 kubenswrapper[27819]: I0319 09:50:24.641647 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b67d2371-be56-42c1-9cc1-9323ed72cf7e" (UID: "b67d2371-be56-42c1-9cc1-9323ed72cf7e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:50:24.669673 master-0 kubenswrapper[27819]: I0319 09:50:24.669613 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr6xz\" (UniqueName: \"kubernetes.io/projected/b67d2371-be56-42c1-9cc1-9323ed72cf7e-kube-api-access-xr6xz\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:24.669673 master-0 kubenswrapper[27819]: I0319 09:50:24.669675 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:24.669998 master-0 kubenswrapper[27819]: I0319 09:50:24.669689 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:24.677379 master-0 kubenswrapper[27819]: I0319 09:50:24.677323 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b67d2371-be56-42c1-9cc1-9323ed72cf7e" (UID: "b67d2371-be56-42c1-9cc1-9323ed72cf7e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:50:24.678034 master-0 kubenswrapper[27819]: I0319 09:50:24.677878 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-config" (OuterVolumeSpecName: "config") pod "b67d2371-be56-42c1-9cc1-9323ed72cf7e" (UID: "b67d2371-be56-42c1-9cc1-9323ed72cf7e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:50:24.722330 master-0 kubenswrapper[27819]: I0319 09:50:24.722262 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b67d2371-be56-42c1-9cc1-9323ed72cf7e" (UID: "b67d2371-be56-42c1-9cc1-9323ed72cf7e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:50:24.777094 master-0 kubenswrapper[27819]: I0319 09:50:24.776409 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:24.777094 master-0 kubenswrapper[27819]: I0319 09:50:24.776454 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-config\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:24.777094 master-0 kubenswrapper[27819]: I0319 09:50:24.776465 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b67d2371-be56-42c1-9cc1-9323ed72cf7e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:24.799521 master-0 kubenswrapper[27819]: I0319 09:50:24.798675 27819 scope.go:117] "RemoveContainer" containerID="f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc"
Mar 19 09:50:24.801388 master-0 kubenswrapper[27819]: E0319 09:50:24.800361 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc\": container with ID starting with f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc not found: ID does not exist" containerID="f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc"
Mar 19 09:50:24.801388 master-0 kubenswrapper[27819]: I0319 09:50:24.800403 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc"} err="failed to get container status \"f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc\": rpc error: code = NotFound desc = could not find container \"f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc\": container with ID starting with f6475ab7941c3e21d25c9ea2cd7bee09fb581b7923d1adfe42d1fcda2b6a3bcc not found: ID does not exist"
Mar 19 09:50:24.801388 master-0 kubenswrapper[27819]: I0319 09:50:24.800424 27819 scope.go:117] "RemoveContainer" containerID="f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28"
Mar 19 09:50:24.801634 master-0 kubenswrapper[27819]: E0319 09:50:24.801457 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28\": container with ID starting with f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28 not found: ID does not exist" containerID="f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28"
Mar 19 09:50:24.801634 master-0 kubenswrapper[27819]: I0319 09:50:24.801481 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28"} err="failed to get container status \"f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28\": rpc error: code = NotFound desc = could not find container \"f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28\": container with ID starting with f9500fd6719db6d54836b15ec5ef281dc24c23581b42ff3afb2f462f7357ce28 not found: ID does not exist"
Mar 19 09:50:24.816982 master-0 kubenswrapper[27819]: I0319 09:50:24.816875 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79b94d96d9-jqhcm"]
Mar 19 09:50:24.837603 master-0 kubenswrapper[27819]: I0319 09:50:24.836760 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79b94d96d9-jqhcm"]
Mar 19 09:50:25.294136 master-0 kubenswrapper[27819]: I0319 09:50:25.294082 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" path="/var/lib/kubelet/pods/b67d2371-be56-42c1-9cc1-9323ed72cf7e/volumes"
Mar 19 09:50:25.446042 master-0 kubenswrapper[27819]: I0319 09:50:25.445844 27819 scope.go:117] "RemoveContainer" containerID="05fff58e1a279d74683b5f373a4fc3cc00d85b1adfb3837d8174ac428245dcab"
Mar 19 09:50:25.446815 master-0 kubenswrapper[27819]: E0319 09:50:25.446097 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-657645bc55-77bfd_openstack(790a7fb4-f06a-47cf-ae30-70dfe7197f5e)\"" pod="openstack/ironic-657645bc55-77bfd" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e"
Mar 19 09:50:26.458410 master-0 kubenswrapper[27819]: I0319 09:50:26.458352 27819 generic.go:334] "Generic (PLEG): container finished" podID="4878f70b-b6db-4fbf-969d-3bb08df3d2bf" containerID="f76fc9059d476dee793d11571d1e9117cfe7c030b0310e7c1ce852a94eb12612" exitCode=1
Mar 19 09:50:26.458410 master-0 kubenswrapper[27819]: I0319 09:50:26.458407 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" event={"ID":"4878f70b-b6db-4fbf-969d-3bb08df3d2bf","Type":"ContainerDied","Data":"f76fc9059d476dee793d11571d1e9117cfe7c030b0310e7c1ce852a94eb12612"}
Mar 19 09:50:26.459178 master-0 kubenswrapper[27819]: I0319 09:50:26.459137 27819 scope.go:117] "RemoveContainer" containerID="f76fc9059d476dee793d11571d1e9117cfe7c030b0310e7c1ce852a94eb12612"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: I0319 09:50:27.047166 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: E0319 09:50:27.047802 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerName="dnsmasq-dns"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: I0319 09:50:27.047823 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerName="dnsmasq-dns"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: E0319 09:50:27.047860 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerName="init"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: I0319 09:50:27.047867 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerName="init"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: E0319 09:50:27.047890 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b796c7-2fa1-405c-b2b4-5bccee70d82b" containerName="mariadb-account-create-update"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: I0319 09:50:27.047897 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b796c7-2fa1-405c-b2b4-5bccee70d82b" containerName="mariadb-account-create-update"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: I0319 09:50:27.048170 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerName="dnsmasq-dns"
Mar 19 09:50:27.048372 master-0 kubenswrapper[27819]: I0319 09:50:27.048206 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b796c7-2fa1-405c-b2b4-5bccee70d82b" containerName="mariadb-account-create-update"
Mar 19 09:50:27.050498 master-0 kubenswrapper[27819]: I0319 09:50:27.048961 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 09:50:27.055161 master-0 kubenswrapper[27819]: I0319 09:50:27.051881 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 19 09:50:27.055161 master-0 kubenswrapper[27819]: I0319 09:50:27.052729 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 19 09:50:27.084831 master-0 kubenswrapper[27819]: I0319 09:50:27.079630 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 19 09:50:27.162601 master-0 kubenswrapper[27819]: I0319 09:50:27.160588 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-nbsd2"]
Mar 19 09:50:27.162601 master-0 kubenswrapper[27819]: I0319 09:50:27.162582 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.174907 master-0 kubenswrapper[27819]: I0319 09:50:27.174100 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mlv\" (UniqueName: \"kubernetes.io/projected/efcb8964-4111-4878-bdb8-6b6ae1be884f-kube-api-access-k6mlv\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.174907 master-0 kubenswrapper[27819]: I0319 09:50:27.174297 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb8964-4111-4878-bdb8-6b6ae1be884f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.174907 master-0 kubenswrapper[27819]: I0319 09:50:27.174414 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/efcb8964-4111-4878-bdb8-6b6ae1be884f-openstack-config\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.174907 master-0 kubenswrapper[27819]: I0319 09:50:27.174624 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/efcb8964-4111-4878-bdb8-6b6ae1be884f-openstack-config-secret\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.182199 master-0 kubenswrapper[27819]: I0319 09:50:27.176298 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 19 09:50:27.182199 master-0 kubenswrapper[27819]: I0319 09:50:27.177033 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 19 09:50:27.193973 master-0 kubenswrapper[27819]: I0319 09:50:27.191233 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-nbsd2"]
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277200 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/efcb8964-4111-4878-bdb8-6b6ae1be884f-openstack-config-secret\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277291 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-etc-podinfo\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277340 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-config\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277410 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277443 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mlv\" (UniqueName: \"kubernetes.io/projected/efcb8964-4111-4878-bdb8-6b6ae1be884f-kube-api-access-k6mlv\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277522 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb8964-4111-4878-bdb8-6b6ae1be884f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277565 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-combined-ca-bundle\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.277698 master-0 kubenswrapper[27819]: I0319 09:50:27.277702 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.278213 master-0 kubenswrapper[27819]: I0319 09:50:27.277785 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/efcb8964-4111-4878-bdb8-6b6ae1be884f-openstack-config\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.278213 master-0 kubenswrapper[27819]: I0319 09:50:27.277877 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-scripts\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.278213 master-0 kubenswrapper[27819]: I0319 09:50:27.277906 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w2hh\" (UniqueName: \"kubernetes.io/projected/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-kube-api-access-9w2hh\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.287885 master-0 kubenswrapper[27819]: I0319 09:50:27.287158 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/efcb8964-4111-4878-bdb8-6b6ae1be884f-openstack-config\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.290366 master-0 kubenswrapper[27819]: I0319 09:50:27.290242 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efcb8964-4111-4878-bdb8-6b6ae1be884f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.291075 master-0 kubenswrapper[27819]: I0319 09:50:27.291049 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/efcb8964-4111-4878-bdb8-6b6ae1be884f-openstack-config-secret\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.303985 master-0 kubenswrapper[27819]: I0319 09:50:27.302300 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mlv\" (UniqueName: \"kubernetes.io/projected/efcb8964-4111-4878-bdb8-6b6ae1be884f-kube-api-access-k6mlv\") pod \"openstackclient\" (UID: \"efcb8964-4111-4878-bdb8-6b6ae1be884f\") " pod="openstack/openstackclient"
Mar 19 09:50:27.371088 master-0 kubenswrapper[27819]: I0319 09:50:27.371058 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 19 09:50:27.380331 master-0 kubenswrapper[27819]: I0319 09:50:27.380291 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-combined-ca-bundle\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.380741 master-0 kubenswrapper[27819]: I0319 09:50:27.380725 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.381285 master-0 kubenswrapper[27819]: I0319 09:50:27.381240 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.381626 master-0 kubenswrapper[27819]: I0319 09:50:27.381608 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-scripts\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.381835 master-0 kubenswrapper[27819]: I0319 09:50:27.381817 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w2hh\" (UniqueName: \"kubernetes.io/projected/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-kube-api-access-9w2hh\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.382039 master-0 kubenswrapper[27819]: I0319 09:50:27.382019 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-etc-podinfo\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.382148 master-0 kubenswrapper[27819]: I0319 09:50:27.382134 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-config\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.382575 master-0 kubenswrapper[27819]: I0319 09:50:27.382554 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.383838 master-0 kubenswrapper[27819]: I0319 09:50:27.383758 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.395042 master-0 kubenswrapper[27819]: I0319 09:50:27.394297 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-etc-podinfo\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.395042 master-0 kubenswrapper[27819]: I0319 09:50:27.394819 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-combined-ca-bundle\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.396552 master-0 kubenswrapper[27819]: I0319 09:50:27.396438 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-config\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.420609 master-0 kubenswrapper[27819]: I0319 09:50:27.420557 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-scripts\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.421276 master-0 kubenswrapper[27819]: I0319 09:50:27.421247 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w2hh\" (UniqueName: \"kubernetes.io/projected/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-kube-api-access-9w2hh\") pod \"ironic-inspector-db-sync-nbsd2\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:27.480630 master-0 kubenswrapper[27819]: I0319 09:50:27.480565 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" event={"ID":"4878f70b-b6db-4fbf-969d-3bb08df3d2bf","Type":"ContainerStarted","Data":"24f28a32d66d5a2a6eaaa06a44a11dba3938043b3ca6d5ea8bb2dbc0bcb270f0"}
Mar 19 09:50:27.483151 master-0 kubenswrapper[27819]: I0319 09:50:27.483131 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst"
Mar 19 09:50:27.512027 master-0 kubenswrapper[27819]: I0319 09:50:27.511981 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:28.871752 master-0 kubenswrapper[27819]: I0319 09:50:28.871683 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79b94d96d9-jqhcm" podUID="b67d2371-be56-42c1-9cc1-9323ed72cf7e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.218:5353: i/o timeout"
Mar 19 09:50:29.093075 master-0 kubenswrapper[27819]: I0319 09:50:29.092970 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-255d6-api-0" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-api" probeResult="failure" output="Get \"http://10.128.0.219:8776/healthcheck\": dial tcp 10.128.0.219:8776: connect: connection refused"
Mar 19 09:50:29.529150 master-0 kubenswrapper[27819]: I0319 09:50:29.529034 27819 generic.go:334] "Generic (PLEG): container finished" podID="321a18ac-d3af-4e1a-bae4-91188771886f" containerID="26d8e61b953a8bffcad3912a61a992666a47b045d8e247b02a42c08a67cc076e" exitCode=137
Mar 19 09:50:29.529150 master-0 kubenswrapper[27819]: I0319 09:50:29.529124 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"321a18ac-d3af-4e1a-bae4-91188771886f","Type":"ContainerDied","Data":"26d8e61b953a8bffcad3912a61a992666a47b045d8e247b02a42c08a67cc076e"}
Mar 19 09:50:29.702526 master-0 kubenswrapper[27819]: I0319 09:50:29.702306 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-7c76f89d9c-mf42h"
Mar 19 09:50:30.357951 master-0 kubenswrapper[27819]: I0319 09:50:30.357898 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-api-0"
Mar 19 09:50:30.482976 master-0 kubenswrapper[27819]: I0319 09:50:30.482874 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/321a18ac-d3af-4e1a-bae4-91188771886f-logs\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.483315 master-0 kubenswrapper[27819]: I0319 09:50:30.483019 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb75s\" (UniqueName: \"kubernetes.io/projected/321a18ac-d3af-4e1a-bae4-91188771886f-kube-api-access-pb75s\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.483315 master-0 kubenswrapper[27819]: I0319 09:50:30.483080 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/321a18ac-d3af-4e1a-bae4-91188771886f-etc-machine-id\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.483315 master-0 kubenswrapper[27819]: I0319 09:50:30.483164 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.483315 master-0 kubenswrapper[27819]: I0319 09:50:30.483195 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-combined-ca-bundle\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.483315 master-0 kubenswrapper[27819]: I0319 09:50:30.483220 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data-custom\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.483315 master-0 kubenswrapper[27819]: I0319 09:50:30.483226 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/321a18ac-d3af-4e1a-bae4-91188771886f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 19 09:50:30.483315 master-0 kubenswrapper[27819]: I0319 09:50:30.483256 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-scripts\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.483949 master-0 kubenswrapper[27819]: I0319 09:50:30.483671 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/321a18ac-d3af-4e1a-bae4-91188771886f-logs" (OuterVolumeSpecName: "logs") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 09:50:30.484499 master-0 kubenswrapper[27819]: I0319 09:50:30.484451 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/321a18ac-d3af-4e1a-bae4-91188771886f-logs\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:30.484499 master-0 kubenswrapper[27819]: I0319 09:50:30.484483 27819 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/321a18ac-d3af-4e1a-bae4-91188771886f-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:30.516795 master-0 kubenswrapper[27819]: I0319 09:50:30.512225 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:30.516795 master-0 kubenswrapper[27819]: I0319 09:50:30.514854 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/321a18ac-d3af-4e1a-bae4-91188771886f-kube-api-access-pb75s" (OuterVolumeSpecName: "kube-api-access-pb75s") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "kube-api-access-pb75s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:30.522122 master-0 kubenswrapper[27819]: I0319 09:50:30.521969 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-scripts" (OuterVolumeSpecName: "scripts") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:30.554994 master-0 kubenswrapper[27819]: I0319 09:50:30.554906 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:30.574159 master-0 kubenswrapper[27819]: I0319 09:50:30.574089 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"321a18ac-d3af-4e1a-bae4-91188771886f","Type":"ContainerDied","Data":"f9017babf2eb8cef987bbefc14233139fc84cfcd10bdde39519dfcd7bfd7b778"}
Mar 19 09:50:30.574159 master-0 kubenswrapper[27819]: I0319 09:50:30.574153 27819 scope.go:117] "RemoveContainer" containerID="26d8e61b953a8bffcad3912a61a992666a47b045d8e247b02a42c08a67cc076e"
Mar 19 09:50:30.574384 master-0 kubenswrapper[27819]: I0319 09:50:30.574168 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-api-0"
Mar 19 09:50:30.585082 master-0 kubenswrapper[27819]: I0319 09:50:30.584525 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data" (OuterVolumeSpecName: "config-data") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:30.585578 master-0 kubenswrapper[27819]: I0319 09:50:30.585530 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data\") pod \"321a18ac-d3af-4e1a-bae4-91188771886f\" (UID: \"321a18ac-d3af-4e1a-bae4-91188771886f\") "
Mar 19 09:50:30.585822 master-0 kubenswrapper[27819]: W0319 09:50:30.585793 27819 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/321a18ac-d3af-4e1a-bae4-91188771886f/volumes/kubernetes.io~secret/config-data
Mar 19 09:50:30.585881 master-0 kubenswrapper[27819]: I0319 09:50:30.585829 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data" (OuterVolumeSpecName: "config-data") pod "321a18ac-d3af-4e1a-bae4-91188771886f" (UID: "321a18ac-d3af-4e1a-bae4-91188771886f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:30.586904 master-0 kubenswrapper[27819]: I0319 09:50:30.586786 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:30.586974 master-0 kubenswrapper[27819]: I0319 09:50:30.586925 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:30.586974 master-0 kubenswrapper[27819]: I0319 09:50:30.586945 27819 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:30.587051 master-0 kubenswrapper[27819]: I0319 09:50:30.586979 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/321a18ac-d3af-4e1a-bae4-91188771886f-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:30.587051 master-0 kubenswrapper[27819]: I0319 09:50:30.586994 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb75s\" (UniqueName: \"kubernetes.io/projected/321a18ac-d3af-4e1a-bae4-91188771886f-kube-api-access-pb75s\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:30.721166 master-0 kubenswrapper[27819]: W0319 09:50:30.716337 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc92a9e_be85_46cb_beef_01cd2ded3c3a.slice/crio-10cb7d6329e9e6c4982a6e0327b4bf6334ff85df8e46bc487a18d4af2ad071c1 WatchSource:0}: Error finding container 10cb7d6329e9e6c4982a6e0327b4bf6334ff85df8e46bc487a18d4af2ad071c1: Status 404 returned error can't find the container with id
10cb7d6329e9e6c4982a6e0327b4bf6334ff85df8e46bc487a18d4af2ad071c1 Mar 19 09:50:30.794463 master-0 kubenswrapper[27819]: I0319 09:50:30.794130 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-nbsd2"] Mar 19 09:50:30.811616 master-0 kubenswrapper[27819]: I0319 09:50:30.807288 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-865fc7c8cc-c7jz9" Mar 19 09:50:30.844413 master-0 kubenswrapper[27819]: I0319 09:50:30.840571 27819 scope.go:117] "RemoveContainer" containerID="287c611a82a89cf125a13cd723a3fd9366fbea3a043315edd46a44f638932a1f" Mar 19 09:50:30.882633 master-0 kubenswrapper[27819]: I0319 09:50:30.875443 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 19 09:50:31.007847 master-0 kubenswrapper[27819]: I0319 09:50:31.005875 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-657645bc55-77bfd"] Mar 19 09:50:31.007847 master-0 kubenswrapper[27819]: I0319 09:50:31.006163 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-657645bc55-77bfd" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api-log" containerID="cri-o://2245b37759f9e7ceaba465680a752c40c69e93067bdf2100afa3e5a4c5ac6abe" gracePeriod=60 Mar 19 09:50:31.086801 master-0 kubenswrapper[27819]: I0319 09:50:31.086748 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-795d6cd54b-mpqdp"] Mar 19 09:50:31.087506 master-0 kubenswrapper[27819]: I0319 09:50:31.087458 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-795d6cd54b-mpqdp" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-api" containerID="cri-o://773154f727dad2f4223683f37ce3ffd5f94657ca52e7a6ad956b5383cc2eda4e" gracePeriod=30 Mar 19 09:50:31.087733 master-0 kubenswrapper[27819]: I0319 09:50:31.087712 27819 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/neutron-795d6cd54b-mpqdp" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-httpd" containerID="cri-o://2e2b4283771e634f66af83b320d8bb1c6334ae2d8468bf708b29ea5ace96a062" gracePeriod=30 Mar 19 09:50:31.132570 master-0 kubenswrapper[27819]: I0319 09:50:31.127046 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:50:31.153921 master-0 kubenswrapper[27819]: I0319 09:50:31.147737 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:50:31.161560 master-0 kubenswrapper[27819]: I0319 09:50:31.158606 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:50:31.161560 master-0 kubenswrapper[27819]: E0319 09:50:31.159191 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-255d6-api-log" Mar 19 09:50:31.161560 master-0 kubenswrapper[27819]: I0319 09:50:31.159206 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-255d6-api-log" Mar 19 09:50:31.161560 master-0 kubenswrapper[27819]: E0319 09:50:31.159253 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-api" Mar 19 09:50:31.161560 master-0 kubenswrapper[27819]: I0319 09:50:31.159260 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-api" Mar 19 09:50:31.161560 master-0 kubenswrapper[27819]: I0319 09:50:31.159473 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-255d6-api-log" Mar 19 09:50:31.161560 master-0 kubenswrapper[27819]: I0319 09:50:31.159493 27819 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="321a18ac-d3af-4e1a-bae4-91188771886f" containerName="cinder-api" Mar 19 09:50:31.171769 master-0 kubenswrapper[27819]: I0319 09:50:31.166890 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.185565 master-0 kubenswrapper[27819]: I0319 09:50:31.172949 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-255d6-api-config-data" Mar 19 09:50:31.185565 master-0 kubenswrapper[27819]: I0319 09:50:31.173166 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 19 09:50:31.185565 master-0 kubenswrapper[27819]: I0319 09:50:31.173291 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 19 09:50:31.185565 master-0 kubenswrapper[27819]: I0319 09:50:31.182959 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.226747 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-scripts\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.226816 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-combined-ca-bundle\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.227017 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0ba79b34-019d-48a8-92db-d72841fe8936-logs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.227127 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-internal-tls-certs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.227170 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb5tr\" (UniqueName: \"kubernetes.io/projected/0ba79b34-019d-48a8-92db-d72841fe8936-kube-api-access-nb5tr\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.227266 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-config-data-custom\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.227298 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-config-data\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.227402 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba79b34-019d-48a8-92db-d72841fe8936-etc-machine-id\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.245566 master-0 kubenswrapper[27819]: I0319 09:50:31.227439 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-public-tls-certs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.300565 master-0 kubenswrapper[27819]: I0319 09:50:31.299597 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="321a18ac-d3af-4e1a-bae4-91188771886f" path="/var/lib/kubelet/pods/321a18ac-d3af-4e1a-bae4-91188771886f/volumes" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.337226 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-internal-tls-certs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.337302 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb5tr\" (UniqueName: \"kubernetes.io/projected/0ba79b34-019d-48a8-92db-d72841fe8936-kube-api-access-nb5tr\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.339825 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-config-data-custom\") pod \"cinder-255d6-api-0\" (UID: 
\"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.339897 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-config-data\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.339981 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba79b34-019d-48a8-92db-d72841fe8936-etc-machine-id\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.340029 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-public-tls-certs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.340139 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-scripts\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 master-0 kubenswrapper[27819]: I0319 09:50:31.340220 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-combined-ca-bundle\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.343629 
master-0 kubenswrapper[27819]: I0319 09:50:31.340392 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ba79b34-019d-48a8-92db-d72841fe8936-logs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.344047 master-0 kubenswrapper[27819]: I0319 09:50:31.343954 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ba79b34-019d-48a8-92db-d72841fe8936-logs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.348560 master-0 kubenswrapper[27819]: I0319 09:50:31.344561 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-internal-tls-certs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.348560 master-0 kubenswrapper[27819]: I0319 09:50:31.346395 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-combined-ca-bundle\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.348560 master-0 kubenswrapper[27819]: I0319 09:50:31.347401 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0ba79b34-019d-48a8-92db-d72841fe8936-etc-machine-id\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.352555 master-0 kubenswrapper[27819]: I0319 09:50:31.350997 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-config-data-custom\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.356554 master-0 kubenswrapper[27819]: I0319 09:50:31.353027 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-public-tls-certs\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.356554 master-0 kubenswrapper[27819]: I0319 09:50:31.355387 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-scripts\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.356554 master-0 kubenswrapper[27819]: I0319 09:50:31.356425 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba79b34-019d-48a8-92db-d72841fe8936-config-data\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.375876 master-0 kubenswrapper[27819]: I0319 09:50:31.372088 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb5tr\" (UniqueName: \"kubernetes.io/projected/0ba79b34-019d-48a8-92db-d72841fe8936-kube-api-access-nb5tr\") pod \"cinder-255d6-api-0\" (UID: \"0ba79b34-019d-48a8-92db-d72841fe8936\") " pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.539563 master-0 kubenswrapper[27819]: I0319 09:50:31.532032 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-255d6-api-0" Mar 19 09:50:31.592023 master-0 kubenswrapper[27819]: I0319 09:50:31.591410 27819 generic.go:334] "Generic (PLEG): container finished" podID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerID="2245b37759f9e7ceaba465680a752c40c69e93067bdf2100afa3e5a4c5ac6abe" exitCode=143 Mar 19 09:50:31.592023 master-0 kubenswrapper[27819]: I0319 09:50:31.591475 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerDied","Data":"2245b37759f9e7ceaba465680a752c40c69e93067bdf2100afa3e5a4c5ac6abe"} Mar 19 09:50:31.612631 master-0 kubenswrapper[27819]: I0319 09:50:31.594738 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"efcb8964-4111-4878-bdb8-6b6ae1be884f","Type":"ContainerStarted","Data":"291998355bafcaea1f0add79c590d659c4540cf02e90d0c828638d7d03369b05"} Mar 19 09:50:31.612631 master-0 kubenswrapper[27819]: I0319 09:50:31.605580 27819 generic.go:334] "Generic (PLEG): container finished" podID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerID="2e2b4283771e634f66af83b320d8bb1c6334ae2d8468bf708b29ea5ace96a062" exitCode=0 Mar 19 09:50:31.612631 master-0 kubenswrapper[27819]: I0319 09:50:31.605662 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795d6cd54b-mpqdp" event={"ID":"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0","Type":"ContainerDied","Data":"2e2b4283771e634f66af83b320d8bb1c6334ae2d8468bf708b29ea5ace96a062"} Mar 19 09:50:31.612631 master-0 kubenswrapper[27819]: I0319 09:50:31.607636 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nbsd2" event={"ID":"3fc92a9e-be85-46cb-beef-01cd2ded3c3a","Type":"ContainerStarted","Data":"10cb7d6329e9e6c4982a6e0327b4bf6334ff85df8e46bc487a18d4af2ad071c1"} Mar 19 09:50:31.736565 master-0 kubenswrapper[27819]: I0319 09:50:31.736223 27819 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:31.791562 master-0 kubenswrapper[27819]: I0319 09:50:31.791390 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-merged\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.791562 master-0 kubenswrapper[27819]: I0319 09:50:31.791487 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-etc-podinfo\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.795564 master-0 kubenswrapper[27819]: I0319 09:50:31.792237 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:31.800559 master-0 kubenswrapper[27819]: I0319 09:50:31.796853 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.800559 master-0 kubenswrapper[27819]: I0319 09:50:31.798216 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-logs" (OuterVolumeSpecName: "logs") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:31.804125 master-0 kubenswrapper[27819]: I0319 09:50:31.797810 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-logs\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.804125 master-0 kubenswrapper[27819]: I0319 09:50:31.803270 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-custom\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.804125 master-0 kubenswrapper[27819]: I0319 09:50:31.803304 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-combined-ca-bundle\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.804125 master-0 kubenswrapper[27819]: I0319 
09:50:31.803435 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-scripts\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.804125 master-0 kubenswrapper[27819]: I0319 09:50:31.803485 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqkqb\" (UniqueName: \"kubernetes.io/projected/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-kube-api-access-mqkqb\") pod \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\" (UID: \"790a7fb4-f06a-47cf-ae30-70dfe7197f5e\") " Mar 19 09:50:31.816791 master-0 kubenswrapper[27819]: I0319 09:50:31.804384 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 19 09:50:31.816791 master-0 kubenswrapper[27819]: I0319 09:50:31.805782 27819 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:31.816791 master-0 kubenswrapper[27819]: I0319 09:50:31.805802 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:31.816791 master-0 kubenswrapper[27819]: I0319 09:50:31.805812 27819 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:31.816791 master-0 kubenswrapper[27819]: I0319 09:50:31.812023 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-kube-api-access-mqkqb" (OuterVolumeSpecName: "kube-api-access-mqkqb") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "kube-api-access-mqkqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:31.829563 master-0 kubenswrapper[27819]: I0319 09:50:31.817976 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:31.869622 master-0 kubenswrapper[27819]: I0319 09:50:31.852736 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-scripts" (OuterVolumeSpecName: "scripts") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:31.869622 master-0 kubenswrapper[27819]: I0319 09:50:31.858134 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data" (OuterVolumeSpecName: "config-data") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:31.870729 master-0 kubenswrapper[27819]: I0319 09:50:31.870313 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "790a7fb4-f06a-47cf-ae30-70dfe7197f5e" (UID: "790a7fb4-f06a-47cf-ae30-70dfe7197f5e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:31.908489 master-0 kubenswrapper[27819]: I0319 09:50:31.908192 27819 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:31.908489 master-0 kubenswrapper[27819]: I0319 09:50:31.908243 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:31.908489 master-0 kubenswrapper[27819]: I0319 09:50:31.908258 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:31.908489 master-0 kubenswrapper[27819]: I0319 09:50:31.908272 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqkqb\" (UniqueName: \"kubernetes.io/projected/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-kube-api-access-mqkqb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:31.908489 master-0 kubenswrapper[27819]: I0319 09:50:31.908288 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/790a7fb4-f06a-47cf-ae30-70dfe7197f5e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:32.117060 master-0 kubenswrapper[27819]: I0319 09:50:32.116516 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-255d6-api-0"] Mar 19 09:50:32.630215 master-0 kubenswrapper[27819]: I0319 09:50:32.630149 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"0ba79b34-019d-48a8-92db-d72841fe8936","Type":"ContainerStarted","Data":"9e21e583979255ee71a36012ee27e6399ef37a90c2aecef1f9a06bdc8192007f"} Mar 19 09:50:32.639656 
master-0 kubenswrapper[27819]: I0319 09:50:32.635653 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-657645bc55-77bfd" event={"ID":"790a7fb4-f06a-47cf-ae30-70dfe7197f5e","Type":"ContainerDied","Data":"8ef062c13c254d9d73daced74a6e84372eaceae7c1a85f66dab201ff27a1ac7e"} Mar 19 09:50:32.639656 master-0 kubenswrapper[27819]: I0319 09:50:32.635722 27819 scope.go:117] "RemoveContainer" containerID="05fff58e1a279d74683b5f373a4fc3cc00d85b1adfb3837d8174ac428245dcab" Mar 19 09:50:32.639656 master-0 kubenswrapper[27819]: I0319 09:50:32.635751 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-657645bc55-77bfd" Mar 19 09:50:32.690640 master-0 kubenswrapper[27819]: I0319 09:50:32.690447 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-657645bc55-77bfd"] Mar 19 09:50:32.703740 master-0 kubenswrapper[27819]: I0319 09:50:32.703670 27819 scope.go:117] "RemoveContainer" containerID="2245b37759f9e7ceaba465680a752c40c69e93067bdf2100afa3e5a4c5ac6abe" Mar 19 09:50:32.705396 master-0 kubenswrapper[27819]: I0319 09:50:32.705293 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-657645bc55-77bfd"] Mar 19 09:50:33.303533 master-0 kubenswrapper[27819]: I0319 09:50:33.303402 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" path="/var/lib/kubelet/pods/790a7fb4-f06a-47cf-ae30-70dfe7197f5e/volumes" Mar 19 09:50:33.415305 master-0 kubenswrapper[27819]: I0319 09:50:33.415258 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:33.662955 master-0 kubenswrapper[27819]: I0319 09:50:33.662897 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" 
event={"ID":"0ba79b34-019d-48a8-92db-d72841fe8936","Type":"ContainerStarted","Data":"16c4204ea5ce27eed3f6559200aca98f2e38f8371b120c8d3d882c139209a39f"} Mar 19 09:50:33.775659 master-0 kubenswrapper[27819]: I0319 09:50:33.775611 27819 scope.go:117] "RemoveContainer" containerID="883dc9a3047bbbb8fc86232bb48c5713d73de92f0179b594fa43e0609fc897a3" Mar 19 09:50:34.462152 master-0 kubenswrapper[27819]: I0319 09:50:34.462108 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5b7d5d8fdd-w8sck"] Mar 19 09:50:34.463053 master-0 kubenswrapper[27819]: E0319 09:50:34.463031 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="init" Mar 19 09:50:34.463150 master-0 kubenswrapper[27819]: I0319 09:50:34.463135 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="init" Mar 19 09:50:34.463258 master-0 kubenswrapper[27819]: E0319 09:50:34.463243 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api" Mar 19 09:50:34.463364 master-0 kubenswrapper[27819]: I0319 09:50:34.463350 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api" Mar 19 09:50:34.463488 master-0 kubenswrapper[27819]: E0319 09:50:34.463472 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api-log" Mar 19 09:50:34.463614 master-0 kubenswrapper[27819]: I0319 09:50:34.463598 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api-log" Mar 19 09:50:34.464030 master-0 kubenswrapper[27819]: I0319 09:50:34.464010 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api-log" Mar 19 09:50:34.464130 
master-0 kubenswrapper[27819]: I0319 09:50:34.464115 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api" Mar 19 09:50:34.464223 master-0 kubenswrapper[27819]: I0319 09:50:34.464209 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api" Mar 19 09:50:34.465012 master-0 kubenswrapper[27819]: E0319 09:50:34.464992 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api" Mar 19 09:50:34.465146 master-0 kubenswrapper[27819]: I0319 09:50:34.465102 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="790a7fb4-f06a-47cf-ae30-70dfe7197f5e" containerName="ironic-api" Mar 19 09:50:34.476162 master-0 kubenswrapper[27819]: I0319 09:50:34.476099 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.480089 master-0 kubenswrapper[27819]: I0319 09:50:34.480050 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 19 09:50:34.480372 master-0 kubenswrapper[27819]: I0319 09:50:34.480116 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 19 09:50:34.480531 master-0 kubenswrapper[27819]: I0319 09:50:34.480169 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 19 09:50:34.495991 master-0 kubenswrapper[27819]: I0319 09:50:34.495912 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b7d5d8fdd-w8sck"] Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.579967 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/a89bc907-2770-4f54-8d5f-d67538a7f50e-etc-swift\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.580105 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-internal-tls-certs\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.580165 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-combined-ca-bundle\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.580190 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89bc907-2770-4f54-8d5f-d67538a7f50e-log-httpd\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.580216 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn6rb\" (UniqueName: \"kubernetes.io/projected/a89bc907-2770-4f54-8d5f-d67538a7f50e-kube-api-access-kn6rb\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.580271 27819 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-public-tls-certs\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.580299 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89bc907-2770-4f54-8d5f-d67538a7f50e-run-httpd\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.585641 master-0 kubenswrapper[27819]: I0319 09:50:34.580370 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-config-data\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.598566 master-0 kubenswrapper[27819]: I0319 09:50:34.594587 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5fvkc"] Mar 19 09:50:34.598566 master-0 kubenswrapper[27819]: I0319 09:50:34.596334 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:34.640803 master-0 kubenswrapper[27819]: I0319 09:50:34.635329 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5fvkc"] Mar 19 09:50:34.683151 master-0 kubenswrapper[27819]: I0319 09:50:34.683098 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-combined-ca-bundle\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.684314 master-0 kubenswrapper[27819]: I0319 09:50:34.684284 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89bc907-2770-4f54-8d5f-d67538a7f50e-log-httpd\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.684481 master-0 kubenswrapper[27819]: I0319 09:50:34.684459 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn6rb\" (UniqueName: \"kubernetes.io/projected/a89bc907-2770-4f54-8d5f-d67538a7f50e-kube-api-access-kn6rb\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.685141 master-0 kubenswrapper[27819]: I0319 09:50:34.685115 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nttln\" (UniqueName: \"kubernetes.io/projected/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-kube-api-access-nttln\") pod \"nova-api-db-create-5fvkc\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:34.685421 master-0 kubenswrapper[27819]: I0319 09:50:34.685360 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-public-tls-certs\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.685719 master-0 kubenswrapper[27819]: I0319 09:50:34.685627 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-operator-scripts\") pod \"nova-api-db-create-5fvkc\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:34.686013 master-0 kubenswrapper[27819]: I0319 09:50:34.685991 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89bc907-2770-4f54-8d5f-d67538a7f50e-run-httpd\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.686592 master-0 kubenswrapper[27819]: I0319 09:50:34.686568 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-config-data\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.686900 master-0 kubenswrapper[27819]: I0319 09:50:34.686571 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89bc907-2770-4f54-8d5f-d67538a7f50e-run-httpd\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.686900 master-0 kubenswrapper[27819]: I0319 09:50:34.685465 27819 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a89bc907-2770-4f54-8d5f-d67538a7f50e-log-httpd\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.687018 master-0 kubenswrapper[27819]: I0319 09:50:34.686873 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a89bc907-2770-4f54-8d5f-d67538a7f50e-etc-swift\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.687177 master-0 kubenswrapper[27819]: I0319 09:50:34.687150 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-internal-tls-certs\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.689488 master-0 kubenswrapper[27819]: I0319 09:50:34.689405 27819 generic.go:334] "Generic (PLEG): container finished" podID="4878f70b-b6db-4fbf-969d-3bb08df3d2bf" containerID="24f28a32d66d5a2a6eaaa06a44a11dba3938043b3ca6d5ea8bb2dbc0bcb270f0" exitCode=1 Mar 19 09:50:34.689603 master-0 kubenswrapper[27819]: I0319 09:50:34.689507 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" event={"ID":"4878f70b-b6db-4fbf-969d-3bb08df3d2bf","Type":"ContainerDied","Data":"24f28a32d66d5a2a6eaaa06a44a11dba3938043b3ca6d5ea8bb2dbc0bcb270f0"} Mar 19 09:50:34.689603 master-0 kubenswrapper[27819]: I0319 09:50:34.689565 27819 scope.go:117] "RemoveContainer" containerID="f76fc9059d476dee793d11571d1e9117cfe7c030b0310e7c1ce852a94eb12612" Mar 19 09:50:34.690414 master-0 kubenswrapper[27819]: I0319 09:50:34.690358 27819 scope.go:117] "RemoveContainer" 
containerID="24f28a32d66d5a2a6eaaa06a44a11dba3938043b3ca6d5ea8bb2dbc0bcb270f0" Mar 19 09:50:34.690768 master-0 kubenswrapper[27819]: E0319 09:50:34.690719 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-78c8dcbbcd-c8rst_openstack(4878f70b-b6db-4fbf-969d-3bb08df3d2bf)\"" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" podUID="4878f70b-b6db-4fbf-969d-3bb08df3d2bf" Mar 19 09:50:34.690854 master-0 kubenswrapper[27819]: I0319 09:50:34.690769 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-public-tls-certs\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.694249 master-0 kubenswrapper[27819]: I0319 09:50:34.691992 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nbsd2" event={"ID":"3fc92a9e-be85-46cb-beef-01cd2ded3c3a","Type":"ContainerStarted","Data":"013a80d69723fbd7b8f9139ea3fbff2996507329f79450900b08b962f86aadb8"} Mar 19 09:50:34.697329 master-0 kubenswrapper[27819]: I0319 09:50:34.697291 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a89bc907-2770-4f54-8d5f-d67538a7f50e-etc-swift\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.708636 master-0 kubenswrapper[27819]: I0319 09:50:34.700206 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-internal-tls-certs\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: 
\"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.708636 master-0 kubenswrapper[27819]: I0319 09:50:34.700454 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-config-data\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.708636 master-0 kubenswrapper[27819]: I0319 09:50:34.702182 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-api-0" event={"ID":"0ba79b34-019d-48a8-92db-d72841fe8936","Type":"ContainerStarted","Data":"5f1bd217de277d5685fe74f711ca918e7a6ae5d6db143dcd5f5b45bd1791becb"} Mar 19 09:50:34.708636 master-0 kubenswrapper[27819]: I0319 09:50:34.703121 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-255d6-api-0" Mar 19 09:50:34.708636 master-0 kubenswrapper[27819]: I0319 09:50:34.705613 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a89bc907-2770-4f54-8d5f-d67538a7f50e-combined-ca-bundle\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.708636 master-0 kubenswrapper[27819]: I0319 09:50:34.707224 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn6rb\" (UniqueName: \"kubernetes.io/projected/a89bc907-2770-4f54-8d5f-d67538a7f50e-kube-api-access-kn6rb\") pod \"swift-proxy-5b7d5d8fdd-w8sck\" (UID: \"a89bc907-2770-4f54-8d5f-d67538a7f50e\") " pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.788206 master-0 kubenswrapper[27819]: I0319 09:50:34.788131 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-nbsd2" 
podStartSLOduration=4.750858938 podStartE2EDuration="7.788113395s" podCreationTimestamp="2026-03-19 09:50:27 +0000 UTC" firstStartedPulling="2026-03-19 09:50:30.838900854 +0000 UTC m=+1015.760478546" lastFinishedPulling="2026-03-19 09:50:33.876155311 +0000 UTC m=+1018.797733003" observedRunningTime="2026-03-19 09:50:34.756111396 +0000 UTC m=+1019.677689088" watchObservedRunningTime="2026-03-19 09:50:34.788113395 +0000 UTC m=+1019.709691087" Mar 19 09:50:34.803934 master-0 kubenswrapper[27819]: I0319 09:50:34.803864 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nttln\" (UniqueName: \"kubernetes.io/projected/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-kube-api-access-nttln\") pod \"nova-api-db-create-5fvkc\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:34.804233 master-0 kubenswrapper[27819]: I0319 09:50:34.804130 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-operator-scripts\") pod \"nova-api-db-create-5fvkc\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:34.813855 master-0 kubenswrapper[27819]: I0319 09:50:34.813808 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-operator-scripts\") pod \"nova-api-db-create-5fvkc\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:34.831240 master-0 kubenswrapper[27819]: I0319 09:50:34.831179 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" Mar 19 09:50:34.840420 master-0 kubenswrapper[27819]: I0319 09:50:34.840294 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nttln\" (UniqueName: \"kubernetes.io/projected/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-kube-api-access-nttln\") pod \"nova-api-db-create-5fvkc\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:34.842164 master-0 kubenswrapper[27819]: I0319 09:50:34.842095 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-255d6-api-0" podStartSLOduration=4.842072723 podStartE2EDuration="4.842072723s" podCreationTimestamp="2026-03-19 09:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:34.830953085 +0000 UTC m=+1019.752530767" watchObservedRunningTime="2026-03-19 09:50:34.842072723 +0000 UTC m=+1019.763650415" Mar 19 09:50:34.915454 master-0 kubenswrapper[27819]: I0319 09:50:34.915357 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:35.094812 master-0 kubenswrapper[27819]: I0319 09:50:35.092641 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-b706-account-create-update-xlr8d"] Mar 19 09:50:35.117948 master-0 kubenswrapper[27819]: I0319 09:50:35.117576 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-b706-account-create-update-xlr8d" Mar 19 09:50:35.127582 master-0 kubenswrapper[27819]: I0319 09:50:35.125207 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 19 09:50:35.129624 master-0 kubenswrapper[27819]: I0319 09:50:35.128245 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lt5cm"] Mar 19 09:50:35.131915 master-0 kubenswrapper[27819]: I0319 09:50:35.130384 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lt5cm" Mar 19 09:50:35.149998 master-0 kubenswrapper[27819]: I0319 09:50:35.149653 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b706-account-create-update-xlr8d"] Mar 19 09:50:35.181244 master-0 kubenswrapper[27819]: I0319 09:50:35.181198 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lt5cm"] Mar 19 09:50:35.195682 master-0 kubenswrapper[27819]: I0319 09:50:35.195503 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-wp7cf"] Mar 19 09:50:35.204835 master-0 kubenswrapper[27819]: I0319 09:50:35.204782 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-wp7cf" Mar 19 09:50:35.212713 master-0 kubenswrapper[27819]: I0319 09:50:35.212659 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wp7cf"] Mar 19 09:50:35.226819 master-0 kubenswrapper[27819]: I0319 09:50:35.226790 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8hq\" (UniqueName: \"kubernetes.io/projected/2d91904e-bd5c-4efd-85c9-569efa06f557-kube-api-access-vm8hq\") pod \"nova-cell0-db-create-lt5cm\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") " pod="openstack/nova-cell0-db-create-lt5cm" Mar 19 09:50:35.227039 master-0 kubenswrapper[27819]: I0319 09:50:35.227024 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d91904e-bd5c-4efd-85c9-569efa06f557-operator-scripts\") pod \"nova-cell0-db-create-lt5cm\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") " pod="openstack/nova-cell0-db-create-lt5cm" Mar 19 09:50:35.227222 master-0 kubenswrapper[27819]: I0319 09:50:35.227202 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twtw6\" (UniqueName: \"kubernetes.io/projected/4dde7f0b-6f7c-461d-9749-66777abb0610-kube-api-access-twtw6\") pod \"nova-api-b706-account-create-update-xlr8d\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " pod="openstack/nova-api-b706-account-create-update-xlr8d" Mar 19 09:50:35.227340 master-0 kubenswrapper[27819]: I0319 09:50:35.227324 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dde7f0b-6f7c-461d-9749-66777abb0610-operator-scripts\") pod \"nova-api-b706-account-create-update-xlr8d\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " 
pod="openstack/nova-api-b706-account-create-update-xlr8d" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.329280 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d91904e-bd5c-4efd-85c9-569efa06f557-operator-scripts\") pod \"nova-cell0-db-create-lt5cm\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") " pod="openstack/nova-cell0-db-create-lt5cm" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.329412 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twtw6\" (UniqueName: \"kubernetes.io/projected/4dde7f0b-6f7c-461d-9749-66777abb0610-kube-api-access-twtw6\") pod \"nova-api-b706-account-create-update-xlr8d\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " pod="openstack/nova-api-b706-account-create-update-xlr8d" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.329716 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dde7f0b-6f7c-461d-9749-66777abb0610-operator-scripts\") pod \"nova-api-b706-account-create-update-xlr8d\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " pod="openstack/nova-api-b706-account-create-update-xlr8d" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.329761 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8hq\" (UniqueName: \"kubernetes.io/projected/2d91904e-bd5c-4efd-85c9-569efa06f557-kube-api-access-vm8hq\") pod \"nova-cell0-db-create-lt5cm\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") " pod="openstack/nova-cell0-db-create-lt5cm" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.329799 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5kf2\" (UniqueName: 
\"kubernetes.io/projected/6f01e2e4-dcb6-4524-bf81-076a2768309d-kube-api-access-m5kf2\") pod \"nova-cell1-db-create-wp7cf\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " pod="openstack/nova-cell1-db-create-wp7cf" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.329825 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f01e2e4-dcb6-4524-bf81-076a2768309d-operator-scripts\") pod \"nova-cell1-db-create-wp7cf\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " pod="openstack/nova-cell1-db-create-wp7cf" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.330290 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d91904e-bd5c-4efd-85c9-569efa06f557-operator-scripts\") pod \"nova-cell0-db-create-lt5cm\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") " pod="openstack/nova-cell0-db-create-lt5cm" Mar 19 09:50:35.333567 master-0 kubenswrapper[27819]: I0319 09:50:35.331143 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dde7f0b-6f7c-461d-9749-66777abb0610-operator-scripts\") pod \"nova-api-b706-account-create-update-xlr8d\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " pod="openstack/nova-api-b706-account-create-update-xlr8d" Mar 19 09:50:35.388104 master-0 kubenswrapper[27819]: I0319 09:50:35.388043 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twtw6\" (UniqueName: \"kubernetes.io/projected/4dde7f0b-6f7c-461d-9749-66777abb0610-kube-api-access-twtw6\") pod \"nova-api-b706-account-create-update-xlr8d\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " pod="openstack/nova-api-b706-account-create-update-xlr8d" Mar 19 09:50:35.395748 master-0 kubenswrapper[27819]: I0319 09:50:35.395712 27819 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm8hq\" (UniqueName: \"kubernetes.io/projected/2d91904e-bd5c-4efd-85c9-569efa06f557-kube-api-access-vm8hq\") pod \"nova-cell0-db-create-lt5cm\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") " pod="openstack/nova-cell0-db-create-lt5cm" Mar 19 09:50:35.434562 master-0 kubenswrapper[27819]: I0319 09:50:35.433211 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5kf2\" (UniqueName: \"kubernetes.io/projected/6f01e2e4-dcb6-4524-bf81-076a2768309d-kube-api-access-m5kf2\") pod \"nova-cell1-db-create-wp7cf\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " pod="openstack/nova-cell1-db-create-wp7cf" Mar 19 09:50:35.434562 master-0 kubenswrapper[27819]: I0319 09:50:35.433288 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f01e2e4-dcb6-4524-bf81-076a2768309d-operator-scripts\") pod \"nova-cell1-db-create-wp7cf\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " pod="openstack/nova-cell1-db-create-wp7cf" Mar 19 09:50:35.434797 master-0 kubenswrapper[27819]: I0319 09:50:35.434687 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f01e2e4-dcb6-4524-bf81-076a2768309d-operator-scripts\") pod \"nova-cell1-db-create-wp7cf\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " pod="openstack/nova-cell1-db-create-wp7cf" Mar 19 09:50:35.457324 master-0 kubenswrapper[27819]: I0319 09:50:35.457278 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5kf2\" (UniqueName: \"kubernetes.io/projected/6f01e2e4-dcb6-4524-bf81-076a2768309d-kube-api-access-m5kf2\") pod \"nova-cell1-db-create-wp7cf\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " pod="openstack/nova-cell1-db-create-wp7cf" Mar 19 09:50:35.460010 master-0 kubenswrapper[27819]: I0319 
09:50:35.459803 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wp7cf"
Mar 19 09:50:35.478410 master-0 kubenswrapper[27819]: I0319 09:50:35.478311 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b706-account-create-update-xlr8d"
Mar 19 09:50:35.487585 master-0 kubenswrapper[27819]: I0319 09:50:35.486604 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ddf5-account-create-update-sl76f"]
Mar 19 09:50:35.492686 master-0 kubenswrapper[27819]: I0319 09:50:35.489105 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ddf5-account-create-update-sl76f"]
Mar 19 09:50:35.492686 master-0 kubenswrapper[27819]: I0319 09:50:35.489408 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.492686 master-0 kubenswrapper[27819]: I0319 09:50:35.489447 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c2c5-account-create-update-slm57"]
Mar 19 09:50:35.494903 master-0 kubenswrapper[27819]: I0319 09:50:35.494849 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 19 09:50:35.495260 master-0 kubenswrapper[27819]: I0319 09:50:35.495208 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:35.504768 master-0 kubenswrapper[27819]: I0319 09:50:35.499643 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 19 09:50:35.504768 master-0 kubenswrapper[27819]: I0319 09:50:35.500023 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lt5cm"
Mar 19 09:50:35.522895 master-0 kubenswrapper[27819]: W0319 09:50:35.522426 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda89bc907_2770_4f54_8d5f_d67538a7f50e.slice/crio-00cc9e70c16a13adae3fccd95914d020d7a3c1a70ec37c95ce40a35a4dc208cc WatchSource:0}: Error finding container 00cc9e70c16a13adae3fccd95914d020d7a3c1a70ec37c95ce40a35a4dc208cc: Status 404 returned error can't find the container with id 00cc9e70c16a13adae3fccd95914d020d7a3c1a70ec37c95ce40a35a4dc208cc
Mar 19 09:50:35.534721 master-0 kubenswrapper[27819]: I0319 09:50:35.534677 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c2c5-account-create-update-slm57"]
Mar 19 09:50:35.536581 master-0 kubenswrapper[27819]: I0319 09:50:35.536530 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4e17be3-0a3b-485f-8259-6f2b66f275a6-operator-scripts\") pod \"nova-cell0-ddf5-account-create-update-sl76f\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.536884 master-0 kubenswrapper[27819]: I0319 09:50:35.536857 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfrd\" (UniqueName: \"kubernetes.io/projected/c4e17be3-0a3b-485f-8259-6f2b66f275a6-kube-api-access-6tfrd\") pod \"nova-cell0-ddf5-account-create-update-sl76f\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.548604 master-0 kubenswrapper[27819]: I0319 09:50:35.546489 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5b7d5d8fdd-w8sck"]
Mar 19 09:50:35.643363 master-0 kubenswrapper[27819]: I0319 09:50:35.642595 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4e17be3-0a3b-485f-8259-6f2b66f275a6-operator-scripts\") pod \"nova-cell0-ddf5-account-create-update-sl76f\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.643363 master-0 kubenswrapper[27819]: I0319 09:50:35.642723 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8wq\" (UniqueName: \"kubernetes.io/projected/2828d124-ef3e-4f24-89ab-4eca7d22c966-kube-api-access-rb8wq\") pod \"nova-cell1-c2c5-account-create-update-slm57\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:35.643363 master-0 kubenswrapper[27819]: I0319 09:50:35.642767 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d124-ef3e-4f24-89ab-4eca7d22c966-operator-scripts\") pod \"nova-cell1-c2c5-account-create-update-slm57\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:35.643363 master-0 kubenswrapper[27819]: I0319 09:50:35.643319 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4e17be3-0a3b-485f-8259-6f2b66f275a6-operator-scripts\") pod \"nova-cell0-ddf5-account-create-update-sl76f\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.644787 master-0 kubenswrapper[27819]: I0319 09:50:35.642815 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfrd\" (UniqueName: \"kubernetes.io/projected/c4e17be3-0a3b-485f-8259-6f2b66f275a6-kube-api-access-6tfrd\") pod \"nova-cell0-ddf5-account-create-update-sl76f\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.662269 master-0 kubenswrapper[27819]: I0319 09:50:35.661190 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tfrd\" (UniqueName: \"kubernetes.io/projected/c4e17be3-0a3b-485f-8259-6f2b66f275a6-kube-api-access-6tfrd\") pod \"nova-cell0-ddf5-account-create-update-sl76f\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.754625 master-0 kubenswrapper[27819]: I0319 09:50:35.750152 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8wq\" (UniqueName: \"kubernetes.io/projected/2828d124-ef3e-4f24-89ab-4eca7d22c966-kube-api-access-rb8wq\") pod \"nova-cell1-c2c5-account-create-update-slm57\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:35.754625 master-0 kubenswrapper[27819]: I0319 09:50:35.750253 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d124-ef3e-4f24-89ab-4eca7d22c966-operator-scripts\") pod \"nova-cell1-c2c5-account-create-update-slm57\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:35.760600 master-0 kubenswrapper[27819]: I0319 09:50:35.758718 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d124-ef3e-4f24-89ab-4eca7d22c966-operator-scripts\") pod \"nova-cell1-c2c5-account-create-update-slm57\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:35.785091 master-0 kubenswrapper[27819]: I0319 09:50:35.783475 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8wq\" (UniqueName: \"kubernetes.io/projected/2828d124-ef3e-4f24-89ab-4eca7d22c966-kube-api-access-rb8wq\") pod \"nova-cell1-c2c5-account-create-update-slm57\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:35.800707 master-0 kubenswrapper[27819]: I0319 09:50:35.799812 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" event={"ID":"a89bc907-2770-4f54-8d5f-d67538a7f50e","Type":"ContainerStarted","Data":"00cc9e70c16a13adae3fccd95914d020d7a3c1a70ec37c95ce40a35a4dc208cc"}
Mar 19 09:50:35.834194 master-0 kubenswrapper[27819]: I0319 09:50:35.833272 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f"
Mar 19 09:50:35.847189 master-0 kubenswrapper[27819]: I0319 09:50:35.847124 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c2c5-account-create-update-slm57"
Mar 19 09:50:36.120573 master-0 kubenswrapper[27819]: I0319 09:50:36.116446 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5fvkc"]
Mar 19 09:50:36.354421 master-0 kubenswrapper[27819]: I0319 09:50:36.354374 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-wp7cf"]
Mar 19 09:50:36.816307 master-0 kubenswrapper[27819]: I0319 09:50:36.816255 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wp7cf" event={"ID":"6f01e2e4-dcb6-4524-bf81-076a2768309d","Type":"ContainerStarted","Data":"5fd2f916ebbcf63bd81d75b952ae169217a1bb4e6428f0c8481aeb088c2f10ba"}
Mar 19 09:50:36.816936 master-0 kubenswrapper[27819]: I0319 09:50:36.816323 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wp7cf" event={"ID":"6f01e2e4-dcb6-4524-bf81-076a2768309d","Type":"ContainerStarted","Data":"d79df9bf529f9c6436a2915e431c3ec56275ca3153968405c36812544fb0bcae"}
Mar 19 09:50:36.818393 master-0 kubenswrapper[27819]: I0319 09:50:36.818369 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5fvkc" event={"ID":"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4","Type":"ContainerStarted","Data":"f19c21ac1eff2e7b5b276132cc723f0087f538ec157ad23acf28d1e571aa62bd"}
Mar 19 09:50:36.818471 master-0 kubenswrapper[27819]: I0319 09:50:36.818402 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5fvkc" event={"ID":"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4","Type":"ContainerStarted","Data":"97b1ca44df1b3bd730585bda3ef8bb7be1061c7c0f54daa5b34b70c9275b97f1"}
Mar 19 09:50:36.821003 master-0 kubenswrapper[27819]: I0319 09:50:36.820959 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" event={"ID":"a89bc907-2770-4f54-8d5f-d67538a7f50e","Type":"ContainerStarted","Data":"771de046ca4c79a0567daed6d276682f135d41968097408e35fcb6748fdacbad"}
Mar 19 09:50:36.881300 master-0 kubenswrapper[27819]: I0319 09:50:36.881253 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-b706-account-create-update-xlr8d"]
Mar 19 09:50:36.886229 master-0 kubenswrapper[27819]: I0319 09:50:36.886154 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 19 09:50:36.907654 master-0 kubenswrapper[27819]: I0319 09:50:36.906620 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lt5cm"]
Mar 19 09:50:37.262087 master-0 kubenswrapper[27819]: W0319 09:50:37.262039 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4e17be3_0a3b_485f_8259_6f2b66f275a6.slice/crio-c00393c9c13dc17501c78b53cf9cb6f7c9a3c91df93fb5a642f51faf083f0ca8 WatchSource:0}: Error finding container c00393c9c13dc17501c78b53cf9cb6f7c9a3c91df93fb5a642f51faf083f0ca8: Status 404 returned error can't find the container with id c00393c9c13dc17501c78b53cf9cb6f7c9a3c91df93fb5a642f51faf083f0ca8
Mar 19 09:50:37.265727 master-0 kubenswrapper[27819]: W0319 09:50:37.265667 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2828d124_ef3e_4f24_89ab_4eca7d22c966.slice/crio-e055ee0af1183f03b7337ed49dcf9d1812bcfb1f7bcf75c98665779aab32b0f1 WatchSource:0}: Error finding container e055ee0af1183f03b7337ed49dcf9d1812bcfb1f7bcf75c98665779aab32b0f1: Status 404 returned error can't find the container with id e055ee0af1183f03b7337ed49dcf9d1812bcfb1f7bcf75c98665779aab32b0f1
Mar 19 09:50:37.269161 master-0 kubenswrapper[27819]: I0319 09:50:37.268878 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c2c5-account-create-update-slm57"]
Mar 19 09:50:37.339329 master-0 kubenswrapper[27819]: I0319 09:50:37.339186 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ddf5-account-create-update-sl76f"]
Mar 19 09:50:37.762853 master-0 kubenswrapper[27819]: E0319 09:50:37.762739 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a6d30b_4ca2_469f_9ccc_35bb03d09cc4.slice/crio-conmon-f19c21ac1eff2e7b5b276132cc723f0087f538ec157ad23acf28d1e571aa62bd.scope\": RecentStats: unable to find data in memory cache]"
Mar 19 09:50:37.762853 master-0 kubenswrapper[27819]: E0319 09:50:37.762811 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a6d30b_4ca2_469f_9ccc_35bb03d09cc4.slice/crio-conmon-f19c21ac1eff2e7b5b276132cc723f0087f538ec157ad23acf28d1e571aa62bd.scope\": RecentStats: unable to find data in memory cache]"
Mar 19 09:50:37.846789 master-0 kubenswrapper[27819]: I0319 09:50:37.846726 27819 generic.go:334] "Generic (PLEG): container finished" podID="c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4" containerID="f19c21ac1eff2e7b5b276132cc723f0087f538ec157ad23acf28d1e571aa62bd" exitCode=0
Mar 19 09:50:37.847413 master-0 kubenswrapper[27819]: I0319 09:50:37.847390 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5fvkc" event={"ID":"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4","Type":"ContainerDied","Data":"f19c21ac1eff2e7b5b276132cc723f0087f538ec157ad23acf28d1e571aa62bd"}
Mar 19 09:50:37.866977 master-0 kubenswrapper[27819]: I0319 09:50:37.866908 27819 generic.go:334] "Generic (PLEG): container finished" podID="3fc92a9e-be85-46cb-beef-01cd2ded3c3a" containerID="013a80d69723fbd7b8f9139ea3fbff2996507329f79450900b08b962f86aadb8" exitCode=0
Mar 19 09:50:37.867160 master-0 kubenswrapper[27819]: I0319 09:50:37.867074 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nbsd2" event={"ID":"3fc92a9e-be85-46cb-beef-01cd2ded3c3a","Type":"ContainerDied","Data":"013a80d69723fbd7b8f9139ea3fbff2996507329f79450900b08b962f86aadb8"}
Mar 19 09:50:37.881986 master-0 kubenswrapper[27819]: I0319 09:50:37.881931 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" event={"ID":"a89bc907-2770-4f54-8d5f-d67538a7f50e","Type":"ContainerStarted","Data":"8a1f147ef61d51ea26a5163fd77d991d82a16f0052ddf6203ce9a20cd1fc871b"}
Mar 19 09:50:37.882836 master-0 kubenswrapper[27819]: I0319 09:50:37.882432 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck"
Mar 19 09:50:37.882836 master-0 kubenswrapper[27819]: I0319 09:50:37.882655 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck"
Mar 19 09:50:37.884983 master-0 kubenswrapper[27819]: I0319 09:50:37.884924 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c2c5-account-create-update-slm57" event={"ID":"2828d124-ef3e-4f24-89ab-4eca7d22c966","Type":"ContainerStarted","Data":"c72f57e99001c8e3c467bcb91a14f18bbb6c18a2f184bc10d2f9242d2426f269"}
Mar 19 09:50:37.885071 master-0 kubenswrapper[27819]: I0319 09:50:37.884995 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c2c5-account-create-update-slm57" event={"ID":"2828d124-ef3e-4f24-89ab-4eca7d22c966","Type":"ContainerStarted","Data":"e055ee0af1183f03b7337ed49dcf9d1812bcfb1f7bcf75c98665779aab32b0f1"}
Mar 19 09:50:37.887200 master-0 kubenswrapper[27819]: I0319 09:50:37.887141 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b706-account-create-update-xlr8d" event={"ID":"4dde7f0b-6f7c-461d-9749-66777abb0610","Type":"ContainerStarted","Data":"505d3486d4d800c7f96de2e2e31705acf6b2d6075e08ede90082502df9785fe3"}
Mar 19 09:50:37.887200 master-0 kubenswrapper[27819]: I0319 09:50:37.887195 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b706-account-create-update-xlr8d" event={"ID":"4dde7f0b-6f7c-461d-9749-66777abb0610","Type":"ContainerStarted","Data":"c64420a2759833d38517d0b34e09b8b93b99a4d2e9289144cff8c42aa6addb64"}
Mar 19 09:50:37.896000 master-0 kubenswrapper[27819]: I0319 09:50:37.895935 27819 generic.go:334] "Generic (PLEG): container finished" podID="6f01e2e4-dcb6-4524-bf81-076a2768309d" containerID="5fd2f916ebbcf63bd81d75b952ae169217a1bb4e6428f0c8481aeb088c2f10ba" exitCode=0
Mar 19 09:50:37.896915 master-0 kubenswrapper[27819]: I0319 09:50:37.896410 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wp7cf" event={"ID":"6f01e2e4-dcb6-4524-bf81-076a2768309d","Type":"ContainerDied","Data":"5fd2f916ebbcf63bd81d75b952ae169217a1bb4e6428f0c8481aeb088c2f10ba"}
Mar 19 09:50:37.900172 master-0 kubenswrapper[27819]: I0319 09:50:37.900128 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f" event={"ID":"c4e17be3-0a3b-485f-8259-6f2b66f275a6","Type":"ContainerStarted","Data":"2d9370bd03b3b66b345c07fc6d14826e7547d65fcec0b74c8b9cfb0f6cbccb04"}
Mar 19 09:50:37.900898 master-0 kubenswrapper[27819]: I0319 09:50:37.900873 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f" event={"ID":"c4e17be3-0a3b-485f-8259-6f2b66f275a6","Type":"ContainerStarted","Data":"c00393c9c13dc17501c78b53cf9cb6f7c9a3c91df93fb5a642f51faf083f0ca8"}
Mar 19 09:50:37.903189 master-0 kubenswrapper[27819]: I0319 09:50:37.903160 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lt5cm" event={"ID":"2d91904e-bd5c-4efd-85c9-569efa06f557","Type":"ContainerStarted","Data":"89c3c6a975cbc32e5a46634dd1b98c41e029adc09b187e8e674d3702acf5166d"}
Mar 19 09:50:37.903353 master-0 kubenswrapper[27819]: I0319 09:50:37.903333 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lt5cm" event={"ID":"2d91904e-bd5c-4efd-85c9-569efa06f557","Type":"ContainerStarted","Data":"0aefe288f42d6cf4c37afddefc81c4a3e1fc9d8f64517797d1ea075038fdd48f"}
Mar 19 09:50:38.005270 master-0 kubenswrapper[27819]: I0319 09:50:38.005172 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f" podStartSLOduration=3.005152819 podStartE2EDuration="3.005152819s" podCreationTimestamp="2026-03-19 09:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:37.942520567 +0000 UTC m=+1022.864098259" watchObservedRunningTime="2026-03-19 09:50:38.005152819 +0000 UTC m=+1022.926730511"
Mar 19 09:50:38.041874 master-0 kubenswrapper[27819]: I0319 09:50:38.041798 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c2c5-account-create-update-slm57" podStartSLOduration=3.041781758 podStartE2EDuration="3.041781758s" podCreationTimestamp="2026-03-19 09:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:38.012672835 +0000 UTC m=+1022.934250547" watchObservedRunningTime="2026-03-19 09:50:38.041781758 +0000 UTC m=+1022.963359450"
Mar 19 09:50:38.067042 master-0 kubenswrapper[27819]: I0319 09:50:38.066777 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck" podStartSLOduration=4.066757405 podStartE2EDuration="4.066757405s" podCreationTimestamp="2026-03-19 09:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:38.046946312 +0000 UTC m=+1022.968524014" watchObservedRunningTime="2026-03-19 09:50:38.066757405 +0000 UTC m=+1022.988335107"
Mar 19 09:50:38.128858 master-0 kubenswrapper[27819]: I0319 09:50:38.128647 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-b706-account-create-update-xlr8d" podStartSLOduration=4.128627078 podStartE2EDuration="4.128627078s" podCreationTimestamp="2026-03-19 09:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:38.122053228 +0000 UTC m=+1023.043630930" watchObservedRunningTime="2026-03-19 09:50:38.128627078 +0000 UTC m=+1023.050204770"
Mar 19 09:50:38.138144 master-0 kubenswrapper[27819]: I0319 09:50:38.134841 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lt5cm" podStartSLOduration=4.134827728 podStartE2EDuration="4.134827728s" podCreationTimestamp="2026-03-19 09:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:50:38.090252384 +0000 UTC m=+1023.011830076" watchObservedRunningTime="2026-03-19 09:50:38.134827728 +0000 UTC m=+1023.056405410"
Mar 19 09:50:38.367859 master-0 kubenswrapper[27819]: I0319 09:50:38.367663 27819 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst"
Mar 19 09:50:38.367859 master-0 kubenswrapper[27819]: I0319 09:50:38.367791 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst"
Mar 19 09:50:38.368256 master-0 kubenswrapper[27819]: I0319 09:50:38.368215 27819 scope.go:117] "RemoveContainer" containerID="24f28a32d66d5a2a6eaaa06a44a11dba3938043b3ca6d5ea8bb2dbc0bcb270f0"
Mar 19 09:50:38.368924 master-0 kubenswrapper[27819]: E0319 09:50:38.368459 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-78c8dcbbcd-c8rst_openstack(4878f70b-b6db-4fbf-969d-3bb08df3d2bf)\"" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" podUID="4878f70b-b6db-4fbf-969d-3bb08df3d2bf"
Mar 19 09:50:38.771920 master-0 kubenswrapper[27819]: I0319 09:50:38.771852 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"]
Mar 19 09:50:38.772408 master-0 kubenswrapper[27819]: I0319 09:50:38.772356 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-ae80b-default-external-api-0" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-log" containerID="cri-o://cd060a3700040059fc15ffe780dadd9e26169e1931b5f1db2bbdf671a5b7c41f" gracePeriod=30
Mar 19 09:50:38.772463 master-0 kubenswrapper[27819]: I0319 09:50:38.772408 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-ae80b-default-external-api-0" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-httpd" containerID="cri-o://d13f6ea63b1ae6ffc84de07f836582898750d3654cc788d090679a0507be935e" gracePeriod=30
Mar 19 09:50:38.924372 master-0 kubenswrapper[27819]: I0319 09:50:38.924317 27819 generic.go:334] "Generic (PLEG): container finished" podID="2828d124-ef3e-4f24-89ab-4eca7d22c966" containerID="c72f57e99001c8e3c467bcb91a14f18bbb6c18a2f184bc10d2f9242d2426f269" exitCode=0
Mar 19 09:50:38.925012 master-0 kubenswrapper[27819]: I0319 09:50:38.924408 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c2c5-account-create-update-slm57" event={"ID":"2828d124-ef3e-4f24-89ab-4eca7d22c966","Type":"ContainerDied","Data":"c72f57e99001c8e3c467bcb91a14f18bbb6c18a2f184bc10d2f9242d2426f269"}
Mar 19 09:50:38.928433 master-0 kubenswrapper[27819]: I0319 09:50:38.928400 27819 generic.go:334] "Generic (PLEG): container finished" podID="4dde7f0b-6f7c-461d-9749-66777abb0610" containerID="505d3486d4d800c7f96de2e2e31705acf6b2d6075e08ede90082502df9785fe3" exitCode=0
Mar 19 09:50:38.928570 master-0 kubenswrapper[27819]: I0319 09:50:38.928468 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b706-account-create-update-xlr8d" event={"ID":"4dde7f0b-6f7c-461d-9749-66777abb0610","Type":"ContainerDied","Data":"505d3486d4d800c7f96de2e2e31705acf6b2d6075e08ede90082502df9785fe3"}
Mar 19 09:50:38.931264 master-0 kubenswrapper[27819]: I0319 09:50:38.931119 27819 generic.go:334] "Generic (PLEG): container finished" podID="c4e17be3-0a3b-485f-8259-6f2b66f275a6" containerID="2d9370bd03b3b66b345c07fc6d14826e7547d65fcec0b74c8b9cfb0f6cbccb04" exitCode=0
Mar 19 09:50:38.931488 master-0 kubenswrapper[27819]: I0319 09:50:38.931187 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f" event={"ID":"c4e17be3-0a3b-485f-8259-6f2b66f275a6","Type":"ContainerDied","Data":"2d9370bd03b3b66b345c07fc6d14826e7547d65fcec0b74c8b9cfb0f6cbccb04"}
Mar 19 09:50:38.941275 master-0 kubenswrapper[27819]: I0319 09:50:38.941228 27819 generic.go:334] "Generic (PLEG): container finished" podID="2d91904e-bd5c-4efd-85c9-569efa06f557" containerID="89c3c6a975cbc32e5a46634dd1b98c41e029adc09b187e8e674d3702acf5166d" exitCode=0
Mar 19 09:50:38.941492 master-0 kubenswrapper[27819]: I0319 09:50:38.941358 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lt5cm" event={"ID":"2d91904e-bd5c-4efd-85c9-569efa06f557","Type":"ContainerDied","Data":"89c3c6a975cbc32e5a46634dd1b98c41e029adc09b187e8e674d3702acf5166d"}
Mar 19 09:50:38.943688 master-0 kubenswrapper[27819]: I0319 09:50:38.942382 27819 scope.go:117] "RemoveContainer" containerID="24f28a32d66d5a2a6eaaa06a44a11dba3938043b3ca6d5ea8bb2dbc0bcb270f0"
Mar 19 09:50:38.966099 master-0 kubenswrapper[27819]: E0319 09:50:38.966011 27819 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-78c8dcbbcd-c8rst_openstack(4878f70b-b6db-4fbf-969d-3bb08df3d2bf)\"" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" podUID="4878f70b-b6db-4fbf-969d-3bb08df3d2bf"
Mar 19 09:50:40.370872 master-0 kubenswrapper[27819]: I0319 09:50:40.370826 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"]
Mar 19 09:50:40.371685 master-0 kubenswrapper[27819]: I0319 09:50:40.371658 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-ae80b-default-internal-api-0" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-log" containerID="cri-o://ce484e028154d421d4730fff341621ec63228c22e5e146b1debaf7b1e9d89357" gracePeriod=30
Mar 19 09:50:40.371967 master-0 kubenswrapper[27819]: I0319 09:50:40.371937 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-ae80b-default-internal-api-0" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-httpd" containerID="cri-o://9333f4d16522e3ef2410a1d2292df7fa579e47ba4d510e97031b4124411d5eb1" gracePeriod=30
Mar 19 09:50:44.680465 master-0 kubenswrapper[27819]: I0319 09:50:44.680249 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-255d6-api-0"
Mar 19 09:50:44.843731 master-0 kubenswrapper[27819]: I0319 09:50:44.840878 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck"
Mar 19 09:50:44.844711 master-0 kubenswrapper[27819]: I0319 09:50:44.844131 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5b7d5d8fdd-w8sck"
Mar 19 09:50:46.974850 master-0 kubenswrapper[27819]: I0319 09:50:46.974028 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lt5cm"
Mar 19 09:50:47.106833 master-0 kubenswrapper[27819]: I0319 09:50:47.106751 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm8hq\" (UniqueName: \"kubernetes.io/projected/2d91904e-bd5c-4efd-85c9-569efa06f557-kube-api-access-vm8hq\") pod \"2d91904e-bd5c-4efd-85c9-569efa06f557\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") "
Mar 19 09:50:47.107471 master-0 kubenswrapper[27819]: I0319 09:50:47.107408 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d91904e-bd5c-4efd-85c9-569efa06f557-operator-scripts\") pod \"2d91904e-bd5c-4efd-85c9-569efa06f557\" (UID: \"2d91904e-bd5c-4efd-85c9-569efa06f557\") "
Mar 19 09:50:47.110892 master-0 kubenswrapper[27819]: I0319 09:50:47.109962 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d91904e-bd5c-4efd-85c9-569efa06f557-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d91904e-bd5c-4efd-85c9-569efa06f557" (UID: "2d91904e-bd5c-4efd-85c9-569efa06f557"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 19 09:50:47.122713 master-0 kubenswrapper[27819]: I0319 09:50:47.122573 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d91904e-bd5c-4efd-85c9-569efa06f557-kube-api-access-vm8hq" (OuterVolumeSpecName: "kube-api-access-vm8hq") pod "2d91904e-bd5c-4efd-85c9-569efa06f557" (UID: "2d91904e-bd5c-4efd-85c9-569efa06f557"). InnerVolumeSpecName "kube-api-access-vm8hq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:47.133749 master-0 kubenswrapper[27819]: I0319 09:50:47.133696 27819 generic.go:334] "Generic (PLEG): container finished" podID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerID="ce484e028154d421d4730fff341621ec63228c22e5e146b1debaf7b1e9d89357" exitCode=143
Mar 19 09:50:47.134229 master-0 kubenswrapper[27819]: I0319 09:50:47.133897 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad","Type":"ContainerDied","Data":"ce484e028154d421d4730fff341621ec63228c22e5e146b1debaf7b1e9d89357"}
Mar 19 09:50:47.137666 master-0 kubenswrapper[27819]: I0319 09:50:47.137601 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lt5cm" event={"ID":"2d91904e-bd5c-4efd-85c9-569efa06f557","Type":"ContainerDied","Data":"0aefe288f42d6cf4c37afddefc81c4a3e1fc9d8f64517797d1ea075038fdd48f"}
Mar 19 09:50:47.137666 master-0 kubenswrapper[27819]: I0319 09:50:47.137639 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aefe288f42d6cf4c37afddefc81c4a3e1fc9d8f64517797d1ea075038fdd48f"
Mar 19 09:50:47.137881 master-0 kubenswrapper[27819]: I0319 09:50:47.137694 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lt5cm"
Mar 19 09:50:47.157809 master-0 kubenswrapper[27819]: I0319 09:50:47.156139 27819 generic.go:334] "Generic (PLEG): container finished" podID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerID="773154f727dad2f4223683f37ce3ffd5f94657ca52e7a6ad956b5383cc2eda4e" exitCode=0
Mar 19 09:50:47.157809 master-0 kubenswrapper[27819]: I0319 09:50:47.156194 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795d6cd54b-mpqdp" event={"ID":"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0","Type":"ContainerDied","Data":"773154f727dad2f4223683f37ce3ffd5f94657ca52e7a6ad956b5383cc2eda4e"}
Mar 19 09:50:47.213384 master-0 kubenswrapper[27819]: I0319 09:50:47.212179 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm8hq\" (UniqueName: \"kubernetes.io/projected/2d91904e-bd5c-4efd-85c9-569efa06f557-kube-api-access-vm8hq\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:47.213384 master-0 kubenswrapper[27819]: I0319 09:50:47.212241 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d91904e-bd5c-4efd-85c9-569efa06f557-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:51.567999 master-0 kubenswrapper[27819]: I0319 09:50:51.567933 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:51.576780 master-0 kubenswrapper[27819]: I0319 09:50:51.576727 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-997449b9d-czw7t"
Mar 19 09:50:53.107634 master-0 kubenswrapper[27819]: I0319 09:50:53.107332 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-ae80b-default-internal-api-0" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.207:9292/healthcheck\": dial tcp 10.128.0.207:9292: connect: connection refused"
Mar 19 09:50:53.107634 master-0 kubenswrapper[27819]: I0319 09:50:53.107478 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-ae80b-default-internal-api-0" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.207:9292/healthcheck\": dial tcp 10.128.0.207:9292: connect: connection refused"
Mar 19 09:50:53.232719 master-0 kubenswrapper[27819]: I0319 09:50:53.232656 27819 generic.go:334] "Generic (PLEG): container finished" podID="2ccd264e-dca5-4707-9b98-868e25c16500" containerID="d13f6ea63b1ae6ffc84de07f836582898750d3654cc788d090679a0507be935e" exitCode=0
Mar 19 09:50:53.233081 master-0 kubenswrapper[27819]: I0319 09:50:53.232714 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"2ccd264e-dca5-4707-9b98-868e25c16500","Type":"ContainerDied","Data":"d13f6ea63b1ae6ffc84de07f836582898750d3654cc788d090679a0507be935e"}
Mar 19 09:50:53.281739 master-0 kubenswrapper[27819]: I0319 09:50:53.281704 27819 scope.go:117] "RemoveContainer" containerID="24f28a32d66d5a2a6eaaa06a44a11dba3938043b3ca6d5ea8bb2dbc0bcb270f0"
Mar 19 09:50:54.333987 master-0 kubenswrapper[27819]: I0319 09:50:54.317523 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f4b5cb8b6-kmwr8"]
Mar 19 09:50:54.333987 master-0 kubenswrapper[27819]: I0319 09:50:54.317775 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f4b5cb8b6-kmwr8" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-log" containerID="cri-o://ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b" gracePeriod=30
Mar 19 09:50:54.333987 master-0 kubenswrapper[27819]: I0319 09:50:54.318274 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5f4b5cb8b6-kmwr8" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-api" containerID="cri-o://475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de" gracePeriod=30
Mar 19 09:50:54.333987 master-0 kubenswrapper[27819]: I0319 09:50:54.318899 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wp7cf"
Mar 19 09:50:54.348692 master-0 kubenswrapper[27819]: I0319 09:50:54.342955 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b706-account-create-update-xlr8d"
Mar 19 09:50:54.348692 master-0 kubenswrapper[27819]: I0319 09:50:54.346811 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:54.348692 master-0 kubenswrapper[27819]: I0319 09:50:54.347312 27819 generic.go:334] "Generic (PLEG): container finished" podID="2ccd264e-dca5-4707-9b98-868e25c16500" containerID="cd060a3700040059fc15ffe780dadd9e26169e1931b5f1db2bbdf671a5b7c41f" exitCode=143
Mar 19 09:50:54.348692 master-0 kubenswrapper[27819]: I0319 09:50:54.347369 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"2ccd264e-dca5-4707-9b98-868e25c16500","Type":"ContainerDied","Data":"cd060a3700040059fc15ffe780dadd9e26169e1931b5f1db2bbdf671a5b7c41f"}
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.349916 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5fvkc" event={"ID":"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4","Type":"ContainerDied","Data":"97b1ca44df1b3bd730585bda3ef8bb7be1061c7c0f54daa5b34b70c9275b97f1"}
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.349961 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b1ca44df1b3bd730585bda3ef8bb7be1061c7c0f54daa5b34b70c9275b97f1"
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.351148 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5fvkc"
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.351403 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-nbsd2" event={"ID":"3fc92a9e-be85-46cb-beef-01cd2ded3c3a","Type":"ContainerDied","Data":"10cb7d6329e9e6c4982a6e0327b4bf6334ff85df8e46bc487a18d4af2ad071c1"}
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.351423 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10cb7d6329e9e6c4982a6e0327b4bf6334ff85df8e46bc487a18d4af2ad071c1"
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.351461 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-nbsd2"
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.355844 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-b706-account-create-update-xlr8d" event={"ID":"4dde7f0b-6f7c-461d-9749-66777abb0610","Type":"ContainerDied","Data":"c64420a2759833d38517d0b34e09b8b93b99a4d2e9289144cff8c42aa6addb64"}
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.355865 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c64420a2759833d38517d0b34e09b8b93b99a4d2e9289144cff8c42aa6addb64"
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.355896 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-b706-account-create-update-xlr8d"
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.357867 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-wp7cf" event={"ID":"6f01e2e4-dcb6-4524-bf81-076a2768309d","Type":"ContainerDied","Data":"d79df9bf529f9c6436a2915e431c3ec56275ca3153968405c36812544fb0bcae"}
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.357896 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d79df9bf529f9c6436a2915e431c3ec56275ca3153968405c36812544fb0bcae"
Mar 19 09:50:54.358702 master-0 kubenswrapper[27819]: I0319 09:50:54.357933 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-wp7cf"
Mar 19 09:50:54.360248 master-0 kubenswrapper[27819]: I0319 09:50:54.359997 27819 generic.go:334] "Generic (PLEG): container finished" podID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerID="9333f4d16522e3ef2410a1d2292df7fa579e47ba4d510e97031b4124411d5eb1" exitCode=0
Mar 19 09:50:54.360248 master-0 kubenswrapper[27819]: I0319 09:50:54.360049 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad","Type":"ContainerDied","Data":"9333f4d16522e3ef2410a1d2292df7fa579e47ba4d510e97031b4124411d5eb1"}
Mar 19 09:50:54.375837 master-0 kubenswrapper[27819]: I0319 09:50:54.375778 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f" event={"ID":"c4e17be3-0a3b-485f-8259-6f2b66f275a6","Type":"ContainerDied","Data":"c00393c9c13dc17501c78b53cf9cb6f7c9a3c91df93fb5a642f51faf083f0ca8"}
Mar 19 09:50:54.376574 master-0 kubenswrapper[27819]: I0319 09:50:54.376539 27819 pod_container_deletor.go:80] "Container not found in pod's containers"
containerID="c00393c9c13dc17501c78b53cf9cb6f7c9a3c91df93fb5a642f51faf083f0ca8" Mar 19 09:50:54.378080 master-0 kubenswrapper[27819]: I0319 09:50:54.378065 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c2c5-account-create-update-slm57" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395491 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-795d6cd54b-mpqdp" event={"ID":"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0","Type":"ContainerDied","Data":"eef23d2e93fecc0d92525cbc3a12e883894b312e42f8674c6084b4c09c800e22"} Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395544 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef23d2e93fecc0d92525cbc3a12e883894b312e42f8674c6084b4c09c800e22" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395653 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-operator-scripts\") pod \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395740 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w2hh\" (UniqueName: \"kubernetes.io/projected/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-kube-api-access-9w2hh\") pod \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395760 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-combined-ca-bundle\") pod \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " Mar 19 
09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395801 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic\") pod \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395846 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5kf2\" (UniqueName: \"kubernetes.io/projected/6f01e2e4-dcb6-4524-bf81-076a2768309d-kube-api-access-m5kf2\") pod \"6f01e2e4-dcb6-4524-bf81-076a2768309d\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395957 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-config\") pod \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.395987 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dde7f0b-6f7c-461d-9749-66777abb0610-operator-scripts\") pod \"4dde7f0b-6f7c-461d-9749-66777abb0610\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.396022 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-etc-podinfo\") pod \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.396046 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f01e2e4-dcb6-4524-bf81-076a2768309d-operator-scripts\") pod \"6f01e2e4-dcb6-4524-bf81-076a2768309d\" (UID: \"6f01e2e4-dcb6-4524-bf81-076a2768309d\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.396071 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-scripts\") pod \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.396149 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nttln\" (UniqueName: \"kubernetes.io/projected/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-kube-api-access-nttln\") pod \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\" (UID: \"c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.396242 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\" (UID: \"3fc92a9e-be85-46cb-beef-01cd2ded3c3a\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.396280 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twtw6\" (UniqueName: \"kubernetes.io/projected/4dde7f0b-6f7c-461d-9749-66777abb0610-kube-api-access-twtw6\") pod \"4dde7f0b-6f7c-461d-9749-66777abb0610\" (UID: \"4dde7f0b-6f7c-461d-9749-66777abb0610\") " Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.396990 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c2c5-account-create-update-slm57" 
event={"ID":"2828d124-ef3e-4f24-89ab-4eca7d22c966","Type":"ContainerDied","Data":"e055ee0af1183f03b7337ed49dcf9d1812bcfb1f7bcf75c98665779aab32b0f1"} Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.397009 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e055ee0af1183f03b7337ed49dcf9d1812bcfb1f7bcf75c98665779aab32b0f1" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.397061 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c2c5-account-create-update-slm57" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.397905 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4" (UID: "c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.398525 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "3fc92a9e-be85-46cb-beef-01cd2ded3c3a" (UID: "3fc92a9e-be85-46cb-beef-01cd2ded3c3a"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.398913 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f01e2e4-dcb6-4524-bf81-076a2768309d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f01e2e4-dcb6-4524-bf81-076a2768309d" (UID: "6f01e2e4-dcb6-4524-bf81-076a2768309d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.403656 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "3fc92a9e-be85-46cb-beef-01cd2ded3c3a" (UID: "3fc92a9e-be85-46cb-beef-01cd2ded3c3a"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.410788 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dde7f0b-6f7c-461d-9749-66777abb0610-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4dde7f0b-6f7c-461d-9749-66777abb0610" (UID: "4dde7f0b-6f7c-461d-9749-66777abb0610"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.414413 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f01e2e4-dcb6-4524-bf81-076a2768309d-kube-api-access-m5kf2" (OuterVolumeSpecName: "kube-api-access-m5kf2") pod "6f01e2e4-dcb6-4524-bf81-076a2768309d" (UID: "6f01e2e4-dcb6-4524-bf81-076a2768309d"). InnerVolumeSpecName "kube-api-access-m5kf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:54.425036 master-0 kubenswrapper[27819]: I0319 09:50:54.423794 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-kube-api-access-nttln" (OuterVolumeSpecName: "kube-api-access-nttln") pod "c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4" (UID: "c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4"). InnerVolumeSpecName "kube-api-access-nttln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:54.434728 master-0 kubenswrapper[27819]: I0319 09:50:54.434637 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-kube-api-access-9w2hh" (OuterVolumeSpecName: "kube-api-access-9w2hh") pod "3fc92a9e-be85-46cb-beef-01cd2ded3c3a" (UID: "3fc92a9e-be85-46cb-beef-01cd2ded3c3a"). InnerVolumeSpecName "kube-api-access-9w2hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:54.435370 master-0 kubenswrapper[27819]: I0319 09:50:54.435317 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "3fc92a9e-be85-46cb-beef-01cd2ded3c3a" (UID: "3fc92a9e-be85-46cb-beef-01cd2ded3c3a"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 19 09:50:54.435370 master-0 kubenswrapper[27819]: I0319 09:50:54.435335 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-scripts" (OuterVolumeSpecName: "scripts") pod "3fc92a9e-be85-46cb-beef-01cd2ded3c3a" (UID: "3fc92a9e-be85-46cb-beef-01cd2ded3c3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:54.435851 master-0 kubenswrapper[27819]: I0319 09:50:54.435783 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dde7f0b-6f7c-461d-9749-66777abb0610-kube-api-access-twtw6" (OuterVolumeSpecName: "kube-api-access-twtw6") pod "4dde7f0b-6f7c-461d-9749-66777abb0610" (UID: "4dde7f0b-6f7c-461d-9749-66777abb0610"). InnerVolumeSpecName "kube-api-access-twtw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:54.485600 master-0 kubenswrapper[27819]: I0319 09:50:54.459174 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f" Mar 19 09:50:54.486173 master-0 kubenswrapper[27819]: I0319 09:50:54.486106 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-config" (OuterVolumeSpecName: "config") pod "3fc92a9e-be85-46cb-beef-01cd2ded3c3a" (UID: "3fc92a9e-be85-46cb-beef-01cd2ded3c3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.499091 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d124-ef3e-4f24-89ab-4eca7d22c966-operator-scripts\") pod \"2828d124-ef3e-4f24-89ab-4eca7d22c966\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.499155 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb8wq\" (UniqueName: \"kubernetes.io/projected/2828d124-ef3e-4f24-89ab-4eca7d22c966-kube-api-access-rb8wq\") pod \"2828d124-ef3e-4f24-89ab-4eca7d22c966\" (UID: \"2828d124-ef3e-4f24-89ab-4eca7d22c966\") " Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.499334 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tfrd\" (UniqueName: \"kubernetes.io/projected/c4e17be3-0a3b-485f-8259-6f2b66f275a6-kube-api-access-6tfrd\") pod \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.499468 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4e17be3-0a3b-485f-8259-6f2b66f275a6-operator-scripts\") pod \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\" (UID: \"c4e17be3-0a3b-485f-8259-6f2b66f275a6\") " Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500555 27819 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500589 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5kf2\" (UniqueName: \"kubernetes.io/projected/6f01e2e4-dcb6-4524-bf81-076a2768309d-kube-api-access-m5kf2\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500601 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500613 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dde7f0b-6f7c-461d-9749-66777abb0610-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500621 27819 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500629 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f01e2e4-dcb6-4524-bf81-076a2768309d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 
09:50:54.500638 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500647 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nttln\" (UniqueName: \"kubernetes.io/projected/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-kube-api-access-nttln\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500656 27819 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500666 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twtw6\" (UniqueName: \"kubernetes.io/projected/4dde7f0b-6f7c-461d-9749-66777abb0610-kube-api-access-twtw6\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500676 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.500685 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w2hh\" (UniqueName: \"kubernetes.io/projected/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-kube-api-access-9w2hh\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.501282 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4e17be3-0a3b-485f-8259-6f2b66f275a6-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "c4e17be3-0a3b-485f-8259-6f2b66f275a6" (UID: "c4e17be3-0a3b-485f-8259-6f2b66f275a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.501906 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2828d124-ef3e-4f24-89ab-4eca7d22c966-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2828d124-ef3e-4f24-89ab-4eca7d22c966" (UID: "2828d124-ef3e-4f24-89ab-4eca7d22c966"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.517902 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e17be3-0a3b-485f-8259-6f2b66f275a6-kube-api-access-6tfrd" (OuterVolumeSpecName: "kube-api-access-6tfrd") pod "c4e17be3-0a3b-485f-8259-6f2b66f275a6" (UID: "c4e17be3-0a3b-485f-8259-6f2b66f275a6"). InnerVolumeSpecName "kube-api-access-6tfrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.521077 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2828d124-ef3e-4f24-89ab-4eca7d22c966-kube-api-access-rb8wq" (OuterVolumeSpecName: "kube-api-access-rb8wq") pod "2828d124-ef3e-4f24-89ab-4eca7d22c966" (UID: "2828d124-ef3e-4f24-89ab-4eca7d22c966"). InnerVolumeSpecName "kube-api-access-rb8wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:54.532813 master-0 kubenswrapper[27819]: I0319 09:50:54.521439 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fc92a9e-be85-46cb-beef-01cd2ded3c3a" (UID: "3fc92a9e-be85-46cb-beef-01cd2ded3c3a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:54.590210 master-0 kubenswrapper[27819]: I0319 09:50:54.551685 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.604910 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-ovndb-tls-certs\") pod \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.605008 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-combined-ca-bundle\") pod \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.605085 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-config\") pod \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.605112 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-httpd-config\") pod \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.605823 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7wsc\" (UniqueName: 
\"kubernetes.io/projected/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-kube-api-access-f7wsc\") pod \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\" (UID: \"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0\") " Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.606717 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tfrd\" (UniqueName: \"kubernetes.io/projected/c4e17be3-0a3b-485f-8259-6f2b66f275a6-kube-api-access-6tfrd\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.606741 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fc92a9e-be85-46cb-beef-01cd2ded3c3a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.606754 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4e17be3-0a3b-485f-8259-6f2b66f275a6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.606767 27819 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2828d124-ef3e-4f24-89ab-4eca7d22c966-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.607194 master-0 kubenswrapper[27819]: I0319 09:50:54.606780 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb8wq\" (UniqueName: \"kubernetes.io/projected/2828d124-ef3e-4f24-89ab-4eca7d22c966-kube-api-access-rb8wq\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.619649 master-0 kubenswrapper[27819]: I0319 09:50:54.611253 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" (UID: 
"1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:54.619649 master-0 kubenswrapper[27819]: I0319 09:50:54.611419 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-kube-api-access-f7wsc" (OuterVolumeSpecName: "kube-api-access-f7wsc") pod "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" (UID: "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0"). InnerVolumeSpecName "kube-api-access-f7wsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:54.710891 master-0 kubenswrapper[27819]: I0319 09:50:54.709375 27819 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-httpd-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.710891 master-0 kubenswrapper[27819]: I0319 09:50:54.709418 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7wsc\" (UniqueName: \"kubernetes.io/projected/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-kube-api-access-f7wsc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.778599 master-0 kubenswrapper[27819]: I0319 09:50:54.775762 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-config" (OuterVolumeSpecName: "config") pod "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" (UID: "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:54.781021 master-0 kubenswrapper[27819]: I0319 09:50:54.780130 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" (UID: "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:54.816594 master-0 kubenswrapper[27819]: I0319 09:50:54.813720 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.816594 master-0 kubenswrapper[27819]: I0319 09:50:54.813768 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.841594 master-0 kubenswrapper[27819]: I0319 09:50:54.837096 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" (UID: "1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:54.916749 master-0 kubenswrapper[27819]: I0319 09:50:54.916525 27819 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:54.946545 master-0 kubenswrapper[27819]: I0319 09:50:54.946500 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:55.018310 master-0 kubenswrapper[27819]: I0319 09:50:55.018190 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-config-data\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.018310 master-0 kubenswrapper[27819]: I0319 09:50:55.018283 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr5kh\" (UniqueName: \"kubernetes.io/projected/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-kube-api-access-wr5kh\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.018436 master-0 kubenswrapper[27819]: I0319 09:50:55.018401 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-httpd-run\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.018467 master-0 kubenswrapper[27819]: I0319 09:50:55.018438 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-internal-tls-certs\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.020598 master-0 kubenswrapper[27819]: I0319 09:50:55.018592 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-scripts\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.020598 master-0 kubenswrapper[27819]: I0319 09:50:55.020033 27819 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.020598 master-0 kubenswrapper[27819]: I0319 09:50:55.020102 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-combined-ca-bundle\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.020598 master-0 kubenswrapper[27819]: I0319 09:50:55.020146 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-logs\") pod \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\" (UID: \"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad\") " Mar 19 09:50:55.022932 master-0 kubenswrapper[27819]: I0319 09:50:55.021392 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-logs" (OuterVolumeSpecName: "logs") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:55.022932 master-0 kubenswrapper[27819]: I0319 09:50:55.021669 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:55.024607 master-0 kubenswrapper[27819]: I0319 09:50:55.024204 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-kube-api-access-wr5kh" (OuterVolumeSpecName: "kube-api-access-wr5kh") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "kube-api-access-wr5kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:55.035662 master-0 kubenswrapper[27819]: I0319 09:50:55.034698 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-scripts" (OuterVolumeSpecName: "scripts") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.043658 master-0 kubenswrapper[27819]: I0319 09:50:55.043613 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8" (OuterVolumeSpecName: "glance") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 09:50:55.123896 master-0 kubenswrapper[27819]: I0319 09:50:55.123777 27819 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.123896 master-0 kubenswrapper[27819]: I0319 09:50:55.123815 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.123896 master-0 kubenswrapper[27819]: I0319 09:50:55.123847 27819 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") on node \"master-0\" " Mar 19 09:50:55.123896 master-0 kubenswrapper[27819]: I0319 09:50:55.123857 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.123896 master-0 kubenswrapper[27819]: I0319 09:50:55.123869 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr5kh\" (UniqueName: \"kubernetes.io/projected/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-kube-api-access-wr5kh\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.124231 master-0 kubenswrapper[27819]: I0319 09:50:55.124133 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-config-data" (OuterVolumeSpecName: "config-data") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.156731 master-0 kubenswrapper[27819]: I0319 09:50:55.155958 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.157165 master-0 kubenswrapper[27819]: I0319 09:50:55.157105 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-ae80b-default-external-api-0" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.208:9292/healthcheck\": dial tcp 10.128.0.208:9292: connect: connection refused" Mar 19 09:50:55.157808 master-0 kubenswrapper[27819]: I0319 09:50:55.157372 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-ae80b-default-external-api-0" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.208:9292/healthcheck\": dial tcp 10.128.0.208:9292: connect: connection refused" Mar 19 09:50:55.217357 master-0 kubenswrapper[27819]: I0319 09:50:55.216505 27819 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 09:50:55.217357 master-0 kubenswrapper[27819]: I0319 09:50:55.216760 27819 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e" (UniqueName: "kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8") on node "master-0" Mar 19 09:50:55.219995 master-0 kubenswrapper[27819]: I0319 09:50:55.218842 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" (UID: "a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.237290 master-0 kubenswrapper[27819]: I0319 09:50:55.232045 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.237290 master-0 kubenswrapper[27819]: I0319 09:50:55.232085 27819 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.237290 master-0 kubenswrapper[27819]: I0319 09:50:55.232096 27819 reconciler_common.go:293] "Volume detached for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.237290 master-0 kubenswrapper[27819]: I0319 09:50:55.232105 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.429517 master-0 kubenswrapper[27819]: I0319 09:50:55.428926 27819 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerStarted","Data":"0ef54928f4871f2bcc441b2849549251c59405ee7f5d818158cfcd082f338837"} Mar 19 09:50:55.433211 master-0 kubenswrapper[27819]: I0319 09:50:55.433160 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"efcb8964-4111-4878-bdb8-6b6ae1be884f","Type":"ContainerStarted","Data":"276cda0ec76d2e7863ebbfc64e0c42503eb7b076ef8dbb29e5cb017df688f050"} Mar 19 09:50:55.454844 master-0 kubenswrapper[27819]: I0319 09:50:55.454759 27819 generic.go:334] "Generic (PLEG): container finished" podID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerID="ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b" exitCode=143 Mar 19 09:50:55.455037 master-0 kubenswrapper[27819]: I0319 09:50:55.454934 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f4b5cb8b6-kmwr8" event={"ID":"9213d9a0-94b1-431b-8116-8fafc2a636cf","Type":"ContainerDied","Data":"ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b"} Mar 19 09:50:55.458241 master-0 kubenswrapper[27819]: I0319 09:50:55.458183 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad","Type":"ContainerDied","Data":"5ddcbb0158ecb1999ebf474040f3019a06b8271050b80f7461441fe3bef06f55"} Mar 19 09:50:55.458545 master-0 kubenswrapper[27819]: I0319 09:50:55.458213 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:55.458545 master-0 kubenswrapper[27819]: I0319 09:50:55.458255 27819 scope.go:117] "RemoveContainer" containerID="9333f4d16522e3ef2410a1d2292df7fa579e47ba4d510e97031b4124411d5eb1" Mar 19 09:50:55.462050 master-0 kubenswrapper[27819]: I0319 09:50:55.461960 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-795d6cd54b-mpqdp" Mar 19 09:50:55.462831 master-0 kubenswrapper[27819]: I0319 09:50:55.462811 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ddf5-account-create-update-sl76f" Mar 19 09:50:55.463332 master-0 kubenswrapper[27819]: I0319 09:50:55.462885 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5fvkc" Mar 19 09:50:55.463814 master-0 kubenswrapper[27819]: I0319 09:50:55.462899 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" event={"ID":"4878f70b-b6db-4fbf-969d-3bb08df3d2bf","Type":"ContainerStarted","Data":"933fd5b6600690361fe73a1630347fc7587207768227400c55155bbe05182c70"} Mar 19 09:50:55.466018 master-0 kubenswrapper[27819]: I0319 09:50:55.465976 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst" Mar 19 09:50:55.543812 master-0 kubenswrapper[27819]: I0319 09:50:55.543727 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.977480268 podStartE2EDuration="28.543701771s" podCreationTimestamp="2026-03-19 09:50:27 +0000 UTC" firstStartedPulling="2026-03-19 09:50:30.839104949 +0000 UTC m=+1015.760682641" lastFinishedPulling="2026-03-19 09:50:54.405326452 +0000 UTC m=+1039.326904144" observedRunningTime="2026-03-19 09:50:55.494365393 +0000 UTC m=+1040.415943085" 
watchObservedRunningTime="2026-03-19 09:50:55.543701771 +0000 UTC m=+1040.465279463" Mar 19 09:50:55.617690 master-0 kubenswrapper[27819]: I0319 09:50:55.616933 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:55.634393 master-0 kubenswrapper[27819]: I0319 09:50:55.633377 27819 scope.go:117] "RemoveContainer" containerID="ce484e028154d421d4730fff341621ec63228c22e5e146b1debaf7b1e9d89357" Mar 19 09:50:55.651717 master-0 kubenswrapper[27819]: I0319 09:50:55.649756 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"] Mar 19 09:50:55.657100 master-0 kubenswrapper[27819]: I0319 09:50:55.657042 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"] Mar 19 09:50:55.658159 master-0 kubenswrapper[27819]: I0319 09:50:55.658130 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-httpd-run\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.658329 master-0 kubenswrapper[27819]: I0319 09:50:55.658309 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-config-data\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.658460 master-0 kubenswrapper[27819]: I0319 09:50:55.658440 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-scripts\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.659097 master-0 kubenswrapper[27819]: I0319 
09:50:55.659074 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:55.659748 master-0 kubenswrapper[27819]: I0319 09:50:55.659724 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.659910 master-0 kubenswrapper[27819]: I0319 09:50:55.659888 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjwdb\" (UniqueName: \"kubernetes.io/projected/2ccd264e-dca5-4707-9b98-868e25c16500-kube-api-access-vjwdb\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.660128 master-0 kubenswrapper[27819]: I0319 09:50:55.660106 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-logs\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.660236 master-0 kubenswrapper[27819]: I0319 09:50:55.660218 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-combined-ca-bundle\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.660393 master-0 kubenswrapper[27819]: I0319 09:50:55.660373 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-public-tls-certs\") pod \"2ccd264e-dca5-4707-9b98-868e25c16500\" (UID: \"2ccd264e-dca5-4707-9b98-868e25c16500\") " Mar 19 09:50:55.665634 master-0 kubenswrapper[27819]: I0319 09:50:55.662296 27819 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.666237 master-0 kubenswrapper[27819]: I0319 09:50:55.666186 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-logs" (OuterVolumeSpecName: "logs") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:50:55.672890 master-0 kubenswrapper[27819]: I0319 09:50:55.672831 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ccd264e-dca5-4707-9b98-868e25c16500-kube-api-access-vjwdb" (OuterVolumeSpecName: "kube-api-access-vjwdb") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "kube-api-access-vjwdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:50:55.674829 master-0 kubenswrapper[27819]: I0319 09:50:55.673186 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-scripts" (OuterVolumeSpecName: "scripts") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.720793 master-0 kubenswrapper[27819]: I0319 09:50:55.717628 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81" (OuterVolumeSpecName: "glance") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 19 09:50:55.725423 master-0 kubenswrapper[27819]: I0319 09:50:55.725371 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-795d6cd54b-mpqdp"] Mar 19 09:50:55.754088 master-0 kubenswrapper[27819]: I0319 09:50:55.754030 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.760794 master-0 kubenswrapper[27819]: I0319 09:50:55.760728 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.761013 master-0 kubenswrapper[27819]: I0319 09:50:55.760779 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-config-data" (OuterVolumeSpecName: "config-data") pod "2ccd264e-dca5-4707-9b98-868e25c16500" (UID: "2ccd264e-dca5-4707-9b98-868e25c16500"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:50:55.764241 master-0 kubenswrapper[27819]: I0319 09:50:55.764187 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.764241 master-0 kubenswrapper[27819]: I0319 09:50:55.764238 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.764491 master-0 kubenswrapper[27819]: I0319 09:50:55.764282 27819 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") on node \"master-0\" " Mar 19 09:50:55.764491 master-0 kubenswrapper[27819]: I0319 09:50:55.764302 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjwdb\" (UniqueName: \"kubernetes.io/projected/2ccd264e-dca5-4707-9b98-868e25c16500-kube-api-access-vjwdb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.764491 master-0 kubenswrapper[27819]: I0319 09:50:55.764317 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ccd264e-dca5-4707-9b98-868e25c16500-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.764491 master-0 kubenswrapper[27819]: I0319 09:50:55.764328 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.764491 master-0 kubenswrapper[27819]: I0319 09:50:55.764339 27819 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2ccd264e-dca5-4707-9b98-868e25c16500-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.783664 master-0 kubenswrapper[27819]: I0319 09:50:55.783611 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-795d6cd54b-mpqdp"] Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.909604 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"] Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910062 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910076 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910088 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910095 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910110 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d91904e-bd5c-4efd-85c9-569efa06f557" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910117 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d91904e-bd5c-4efd-85c9-569efa06f557" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910133 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-log" Mar 19 
09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910139 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-log" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910149 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f01e2e4-dcb6-4524-bf81-076a2768309d" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910155 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f01e2e4-dcb6-4524-bf81-076a2768309d" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910167 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910173 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910189 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-log" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910195 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-log" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910213 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2828d124-ef3e-4f24-89ab-4eca7d22c966" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910219 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2828d124-ef3e-4f24-89ab-4eca7d22c966" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910229 
27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-api" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910235 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-api" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910254 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e17be3-0a3b-485f-8259-6f2b66f275a6" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910260 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e17be3-0a3b-485f-8259-6f2b66f275a6" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910279 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910285 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910299 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc92a9e-be85-46cb-beef-01cd2ded3c3a" containerName="ironic-inspector-db-sync" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910305 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc92a9e-be85-46cb-beef-01cd2ded3c3a" containerName="ironic-inspector-db-sync" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: E0319 09:50:55.910323 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dde7f0b-6f7c-461d-9749-66777abb0610" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910329 27819 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4dde7f0b-6f7c-461d-9749-66777abb0610" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910513 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-api" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910526 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e17be3-0a3b-485f-8259-6f2b66f275a6" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910543 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910576 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc92a9e-be85-46cb-beef-01cd2ded3c3a" containerName="ironic-inspector-db-sync" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910589 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-log" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910605 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2828d124-ef3e-4f24-89ab-4eca7d22c966" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910623 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dde7f0b-6f7c-461d-9749-66777abb0610" containerName="mariadb-account-create-update" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910642 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" containerName="glance-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910656 27819 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2d91904e-bd5c-4efd-85c9-569efa06f557" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910666 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f01e2e4-dcb6-4524-bf81-076a2768309d" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910675 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-log" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910686 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4" containerName="mariadb-database-create" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.910695 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" containerName="glance-httpd" Mar 19 09:50:55.914078 master-0 kubenswrapper[27819]: I0319 09:50:55.911866 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:55.933592 master-0 kubenswrapper[27819]: I0319 09:50:55.932279 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-ae80b-default-internal-config-data" Mar 19 09:50:55.933592 master-0 kubenswrapper[27819]: I0319 09:50:55.932503 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 19 09:50:55.950285 master-0 kubenswrapper[27819]: I0319 09:50:55.950246 27819 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 19 09:50:55.950711 master-0 kubenswrapper[27819]: I0319 09:50:55.950694 27819 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e" (UniqueName: "kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81") on node "master-0" Mar 19 09:50:55.974807 master-0 kubenswrapper[27819]: I0319 09:50:55.973045 27819 reconciler_common.go:293] "Volume detached for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") on node \"master-0\" DevicePath \"\"" Mar 19 09:50:55.987664 master-0 kubenswrapper[27819]: I0319 09:50:55.987620 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"] Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076070 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076179 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-scripts\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076210 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-config-data\") pod \"glance-ae80b-default-internal-api-0\" (UID: 
\"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076244 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-internal-tls-certs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076276 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c500ca-9291-43ed-9fa7-91debd8b6289-logs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076323 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82c500ca-9291-43ed-9fa7-91debd8b6289-httpd-run\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076427 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbqs\" (UniqueName: \"kubernetes.io/projected/82c500ca-9291-43ed-9fa7-91debd8b6289-kube-api-access-mlbqs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.083117 master-0 kubenswrapper[27819]: I0319 09:50:56.076477 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-combined-ca-bundle\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.180415 master-0 kubenswrapper[27819]: I0319 09:50:56.179680 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.180415 master-0 kubenswrapper[27819]: I0319 09:50:56.180160 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-scripts\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.180415 master-0 kubenswrapper[27819]: I0319 09:50:56.180231 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-config-data\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.180415 master-0 kubenswrapper[27819]: I0319 09:50:56.180279 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-internal-tls-certs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.180936 master-0 kubenswrapper[27819]: I0319 
09:50:56.180896 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c500ca-9291-43ed-9fa7-91debd8b6289-logs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.181038 master-0 kubenswrapper[27819]: I0319 09:50:56.180994 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82c500ca-9291-43ed-9fa7-91debd8b6289-httpd-run\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.181154 master-0 kubenswrapper[27819]: I0319 09:50:56.181090 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbqs\" (UniqueName: \"kubernetes.io/projected/82c500ca-9291-43ed-9fa7-91debd8b6289-kube-api-access-mlbqs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.181892 master-0 kubenswrapper[27819]: I0319 09:50:56.181223 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-combined-ca-bundle\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.181892 master-0 kubenswrapper[27819]: I0319 09:50:56.181847 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82c500ca-9291-43ed-9fa7-91debd8b6289-logs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 
09:50:56.183502 master-0 kubenswrapper[27819]: I0319 09:50:56.182006 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/82c500ca-9291-43ed-9fa7-91debd8b6289-httpd-run\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.185500 master-0 kubenswrapper[27819]: I0319 09:50:56.184494 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-combined-ca-bundle\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.185500 master-0 kubenswrapper[27819]: I0319 09:50:56.184535 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-internal-tls-certs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.185500 master-0 kubenswrapper[27819]: I0319 09:50:56.185150 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 19 09:50:56.185500 master-0 kubenswrapper[27819]: I0319 09:50:56.185173 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2c67bbd9bf8089e21ab79a2f4175808b6bfebe7eda66c90638541094af90db59/globalmount\"" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.185864 master-0 kubenswrapper[27819]: I0319 09:50:56.185832 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-scripts\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.192588 master-0 kubenswrapper[27819]: I0319 09:50:56.191352 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82c500ca-9291-43ed-9fa7-91debd8b6289-config-data\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.207563 master-0 kubenswrapper[27819]: I0319 09:50:56.207486 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbqs\" (UniqueName: \"kubernetes.io/projected/82c500ca-9291-43ed-9fa7-91debd8b6289-kube-api-access-mlbqs\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:50:56.641305 master-0 kubenswrapper[27819]: I0319 09:50:56.641009 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" 
event={"ID":"2ccd264e-dca5-4707-9b98-868e25c16500","Type":"ContainerDied","Data":"9f9e2bad0af9d6208d654f034bb2c834a9fd6e7bac4205389ab88e706d3c2c1f"} Mar 19 09:50:56.641305 master-0 kubenswrapper[27819]: I0319 09:50:56.641082 27819 scope.go:117] "RemoveContainer" containerID="d13f6ea63b1ae6ffc84de07f836582898750d3654cc788d090679a0507be935e" Mar 19 09:50:56.643889 master-0 kubenswrapper[27819]: I0319 09:50:56.641342 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:56.654820 master-0 kubenswrapper[27819]: I0319 09:50:56.653337 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5df5bc68f9-s4qw5"] Mar 19 09:50:56.681278 master-0 kubenswrapper[27819]: I0319 09:50:56.676314 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.749659 master-0 kubenswrapper[27819]: I0319 09:50:56.749386 27819 scope.go:117] "RemoveContainer" containerID="cd060a3700040059fc15ffe780dadd9e26169e1931b5f1db2bbdf671a5b7c41f" Mar 19 09:50:56.768995 master-0 kubenswrapper[27819]: I0319 09:50:56.767753 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5df5bc68f9-s4qw5"] Mar 19 09:50:56.808752 master-0 kubenswrapper[27819]: I0319 09:50:56.808696 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:50:56.819597 master-0 kubenswrapper[27819]: I0319 09:50:56.815373 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 19 09:50:56.819597 master-0 kubenswrapper[27819]: I0319 09:50:56.818037 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 19 09:50:56.819597 master-0 kubenswrapper[27819]: I0319 09:50:56.818479 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 19 09:50:56.819597 master-0 kubenswrapper[27819]: I0319 09:50:56.818824 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 19 09:50:56.833733 master-0 kubenswrapper[27819]: I0319 09:50:56.833682 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-config\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.833944 master-0 kubenswrapper[27819]: I0319 09:50:56.833875 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-nb\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.834392 master-0 kubenswrapper[27819]: I0319 09:50:56.834358 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4fbg\" (UniqueName: \"kubernetes.io/projected/492322b8-ccb0-4440-b0f4-8d43bbd889e0-kube-api-access-b4fbg\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.834510 master-0 kubenswrapper[27819]: I0319 09:50:56.834488 27819 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-svc\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.834725 master-0 kubenswrapper[27819]: I0319 09:50:56.834704 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-sb\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.834791 master-0 kubenswrapper[27819]: I0319 09:50:56.834771 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-swift-storage-0\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.838242 master-0 kubenswrapper[27819]: I0319 09:50:56.837656 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:50:56.870016 master-0 kubenswrapper[27819]: I0319 09:50:56.867435 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:50:56.895921 master-0 kubenswrapper[27819]: I0319 09:50:56.883504 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:50:56.924732 master-0 kubenswrapper[27819]: I0319 09:50:56.924639 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:50:56.927083 master-0 kubenswrapper[27819]: I0319 09:50:56.927029 27819 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:56.932611 master-0 kubenswrapper[27819]: I0319 09:50:56.932564 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-ae80b-default-external-config-data" Mar 19 09:50:56.932939 master-0 kubenswrapper[27819]: I0319 09:50:56.932915 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 19 09:50:56.937032 master-0 kubenswrapper[27819]: I0319 09:50:56.936985 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-sb\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.937168 master-0 kubenswrapper[27819]: I0319 09:50:56.937057 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-swift-storage-0\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.937168 master-0 kubenswrapper[27819]: I0319 09:50:56.937105 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:56.937259 master-0 kubenswrapper[27819]: I0319 09:50:56.937169 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-config\") pod 
\"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.937259 master-0 kubenswrapper[27819]: I0319 09:50:56.937196 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:56.937319 master-0 kubenswrapper[27819]: I0319 09:50:56.937272 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-scripts\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:56.937319 master-0 kubenswrapper[27819]: I0319 09:50:56.937291 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpb5\" (UniqueName: \"kubernetes.io/projected/6c02c97c-cf93-44c0-9399-43e7885bfd70-kube-api-access-vkpb5\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:56.937385 master-0 kubenswrapper[27819]: I0319 09:50:56.937325 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-nb\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.937385 master-0 kubenswrapper[27819]: I0319 09:50:56.937346 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-config\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:56.937440 master-0 kubenswrapper[27819]: I0319 09:50:56.937394 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4fbg\" (UniqueName: \"kubernetes.io/projected/492322b8-ccb0-4440-b0f4-8d43bbd889e0-kube-api-access-b4fbg\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.937440 master-0 kubenswrapper[27819]: I0319 09:50:56.937429 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:56.937500 master-0 kubenswrapper[27819]: I0319 09:50:56.937477 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-svc\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.937568 master-0 kubenswrapper[27819]: I0319 09:50:56.937538 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6c02c97c-cf93-44c0-9399-43e7885bfd70-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:56.938754 master-0 kubenswrapper[27819]: I0319 09:50:56.938733 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-sb\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.938950 master-0 kubenswrapper[27819]: I0319 09:50:56.938906 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-nb\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.939129 master-0 kubenswrapper[27819]: I0319 09:50:56.938947 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-swift-storage-0\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.939757 master-0 kubenswrapper[27819]: I0319 09:50:56.939421 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-config\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.940327 master-0 kubenswrapper[27819]: I0319 09:50:56.940148 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-svc\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:56.955445 master-0 kubenswrapper[27819]: I0319 09:50:56.955412 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"] Mar 19 09:50:56.956585 master-0 
kubenswrapper[27819]: I0319 09:50:56.956452 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4fbg\" (UniqueName: \"kubernetes.io/projected/492322b8-ccb0-4440-b0f4-8d43bbd889e0-kube-api-access-b4fbg\") pod \"dnsmasq-dns-5df5bc68f9-s4qw5\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:57.006221 master-0 kubenswrapper[27819]: I0319 09:50:57.006090 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.039743 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.039825 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv2jh\" (UniqueName: \"kubernetes.io/projected/cce27e6f-3c8a-4763-9862-87419f46e912-kube-api-access-wv2jh\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.039858 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.039880 27819 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6c02c97c-cf93-44c0-9399-43e7885bfd70-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.039914 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.039934 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cce27e6f-3c8a-4763-9862-87419f46e912-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.039984 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.040006 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: 
I0319 09:50:57.040027 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce27e6f-3c8a-4763-9862-87419f46e912-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.040066 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.040098 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.040140 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-scripts\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.040157 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkpb5\" (UniqueName: \"kubernetes.io/projected/6c02c97c-cf93-44c0-9399-43e7885bfd70-kube-api-access-vkpb5\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0" Mar 19 
09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.040176 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.040220 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-config\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.042988 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.043403 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.043429 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.048772 master-0 kubenswrapper[27819]: I0319 09:50:57.046595 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-scripts\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.054568 master-0 kubenswrapper[27819]: I0319 09:50:57.051294 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6c02c97c-cf93-44c0-9399-43e7885bfd70-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.054568 master-0 kubenswrapper[27819]: I0319 09:50:57.052379 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-config\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.074256 master-0 kubenswrapper[27819]: I0319 09:50:57.064787 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpb5\" (UniqueName: \"kubernetes.io/projected/6c02c97c-cf93-44c0-9399-43e7885bfd70-kube-api-access-vkpb5\") pod \"ironic-inspector-0\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.132817 master-0 kubenswrapper[27819]: I0319 09:50:57.132692 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7b83e5a0-4595-4d1d-9c8d-a87666c8505e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^781aba5d-29bf-4754-9bb6-eb57c92bbcc8\") pod \"glance-ae80b-default-internal-api-0\" (UID: \"82c500ca-9291-43ed-9fa7-91debd8b6289\") " pod="openstack/glance-ae80b-default-internal-api-0"
Mar 19 09:50:57.142452 master-0 kubenswrapper[27819]: I0319 09:50:57.142387 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.142689 master-0 kubenswrapper[27819]: I0319 09:50:57.142615 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv2jh\" (UniqueName: \"kubernetes.io/projected/cce27e6f-3c8a-4763-9862-87419f46e912-kube-api-access-wv2jh\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.142689 master-0 kubenswrapper[27819]: I0319 09:50:57.142665 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.142756 master-0 kubenswrapper[27819]: I0319 09:50:57.142740 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.142790 master-0 kubenswrapper[27819]: I0319 09:50:57.142771 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cce27e6f-3c8a-4763-9862-87419f46e912-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.142880 master-0 kubenswrapper[27819]: I0319 09:50:57.142856 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.142944 master-0 kubenswrapper[27819]: I0319 09:50:57.142893 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce27e6f-3c8a-4763-9862-87419f46e912-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.142980 master-0 kubenswrapper[27819]: I0319 09:50:57.142954 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.147692 master-0 kubenswrapper[27819]: I0319 09:50:57.144716 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cce27e6f-3c8a-4763-9862-87419f46e912-logs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.147692 master-0 kubenswrapper[27819]: I0319 09:50:57.145410 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cce27e6f-3c8a-4763-9862-87419f46e912-httpd-run\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.150304 master-0 kubenswrapper[27819]: I0319 09:50:57.149012 27819 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 19 09:50:57.150304 master-0 kubenswrapper[27819]: I0319 09:50:57.149066 27819 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d8a587592d303f8470fc6f13326b5360e6df71aa3ac25c2d7cd8ffda26d20834/globalmount\"" pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.151250 master-0 kubenswrapper[27819]: I0319 09:50:57.151167 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-public-tls-certs\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.152251 master-0 kubenswrapper[27819]: I0319 09:50:57.152208 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-scripts\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.154334 master-0 kubenswrapper[27819]: I0319 09:50:57.154082 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-combined-ca-bundle\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.155154 master-0 kubenswrapper[27819]: I0319 09:50:57.154737 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cce27e6f-3c8a-4763-9862-87419f46e912-config-data\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.170546 master-0 kubenswrapper[27819]: I0319 09:50:57.170505 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 19 09:50:57.173348 master-0 kubenswrapper[27819]: I0319 09:50:57.173316 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv2jh\" (UniqueName: \"kubernetes.io/projected/cce27e6f-3c8a-4763-9862-87419f46e912-kube-api-access-wv2jh\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:57.239964 master-0 kubenswrapper[27819]: I0319 09:50:57.238304 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-internal-api-0"
Mar 19 09:50:57.357251 master-0 kubenswrapper[27819]: I0319 09:50:57.354905 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" path="/var/lib/kubelet/pods/1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0/volumes"
Mar 19 09:50:57.365145 master-0 kubenswrapper[27819]: I0319 09:50:57.364455 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ccd264e-dca5-4707-9b98-868e25c16500" path="/var/lib/kubelet/pods/2ccd264e-dca5-4707-9b98-868e25c16500/volumes"
Mar 19 09:50:57.365821 master-0 kubenswrapper[27819]: I0319 09:50:57.365789 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad" path="/var/lib/kubelet/pods/a6848f7e-7f17-4c6a-ae5d-3dee4ef381ad/volumes"
Mar 19 09:50:57.657733 master-0 kubenswrapper[27819]: I0319 09:50:57.651672 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5df5bc68f9-s4qw5"]
Mar 19 09:50:57.671650 master-0 kubenswrapper[27819]: W0319 09:50:57.671577 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod492322b8_ccb0_4440_b0f4_8d43bbd889e0.slice/crio-f154f14599ccf059fc6c2f26f658bdda4a5402ad112193112207752f3721a85c WatchSource:0}: Error finding container f154f14599ccf059fc6c2f26f658bdda4a5402ad112193112207752f3721a85c: Status 404 returned error can't find the container with id f154f14599ccf059fc6c2f26f658bdda4a5402ad112193112207752f3721a85c
Mar 19 09:50:57.975814 master-0 kubenswrapper[27819]: I0319 09:50:57.969539 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 19 09:50:58.291371 master-0 kubenswrapper[27819]: I0319 09:50:58.240364 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-internal-api-0"]
Mar 19 09:50:58.426768 master-0 kubenswrapper[27819]: I0319 09:50:58.426404 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-78c8dcbbcd-c8rst"
Mar 19 09:50:58.494952 master-0 kubenswrapper[27819]: I0319 09:50:58.494903 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-521e3f00-8bcc-49ce-aca0-d5d3ae8f4e2e\" (UniqueName: \"kubernetes.io/csi/topolvm.io^05c45e43-c9ed-4ede-8b47-2bedef083e81\") pod \"glance-ae80b-default-external-api-0\" (UID: \"cce27e6f-3c8a-4763-9862-87419f46e912\") " pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:58.495352 master-0 kubenswrapper[27819]: I0319 09:50:58.495323 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:50:58.513249 master-0 kubenswrapper[27819]: I0319 09:50:58.513198 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ae80b-default-external-api-0"
Mar 19 09:50:58.613667 master-0 kubenswrapper[27819]: I0319 09:50:58.611286 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-combined-ca-bundle\") pod \"9213d9a0-94b1-431b-8116-8fafc2a636cf\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") "
Mar 19 09:50:58.613667 master-0 kubenswrapper[27819]: I0319 09:50:58.611396 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-internal-tls-certs\") pod \"9213d9a0-94b1-431b-8116-8fafc2a636cf\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") "
Mar 19 09:50:58.613667 master-0 kubenswrapper[27819]: I0319 09:50:58.611429 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-public-tls-certs\") pod \"9213d9a0-94b1-431b-8116-8fafc2a636cf\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") "
Mar 19 09:50:58.613667 master-0 kubenswrapper[27819]: I0319 09:50:58.611481 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w277l\" (UniqueName: \"kubernetes.io/projected/9213d9a0-94b1-431b-8116-8fafc2a636cf-kube-api-access-w277l\") pod \"9213d9a0-94b1-431b-8116-8fafc2a636cf\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") "
Mar 19 09:50:58.613667 master-0 kubenswrapper[27819]: I0319 09:50:58.611506 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-scripts\") pod \"9213d9a0-94b1-431b-8116-8fafc2a636cf\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") "
Mar 19 09:50:58.613667 master-0 kubenswrapper[27819]: I0319 09:50:58.611632 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-config-data\") pod \"9213d9a0-94b1-431b-8116-8fafc2a636cf\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") "
Mar 19 09:50:58.613667 master-0 kubenswrapper[27819]: I0319 09:50:58.611663 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9213d9a0-94b1-431b-8116-8fafc2a636cf-logs\") pod \"9213d9a0-94b1-431b-8116-8fafc2a636cf\" (UID: \"9213d9a0-94b1-431b-8116-8fafc2a636cf\") "
Mar 19 09:50:58.633695 master-0 kubenswrapper[27819]: I0319 09:50:58.632745 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9213d9a0-94b1-431b-8116-8fafc2a636cf-logs" (OuterVolumeSpecName: "logs") pod "9213d9a0-94b1-431b-8116-8fafc2a636cf" (UID: "9213d9a0-94b1-431b-8116-8fafc2a636cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 19 09:50:58.647605 master-0 kubenswrapper[27819]: I0319 09:50:58.645850 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-scripts" (OuterVolumeSpecName: "scripts") pod "9213d9a0-94b1-431b-8116-8fafc2a636cf" (UID: "9213d9a0-94b1-431b-8116-8fafc2a636cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:58.657284 master-0 kubenswrapper[27819]: I0319 09:50:58.657222 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9213d9a0-94b1-431b-8116-8fafc2a636cf-kube-api-access-w277l" (OuterVolumeSpecName: "kube-api-access-w277l") pod "9213d9a0-94b1-431b-8116-8fafc2a636cf" (UID: "9213d9a0-94b1-431b-8116-8fafc2a636cf"). InnerVolumeSpecName "kube-api-access-w277l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 09:50:58.661348 master-0 kubenswrapper[27819]: I0319 09:50:58.661293 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w277l\" (UniqueName: \"kubernetes.io/projected/9213d9a0-94b1-431b-8116-8fafc2a636cf-kube-api-access-w277l\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:58.661348 master-0 kubenswrapper[27819]: I0319 09:50:58.661342 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-scripts\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:58.662247 master-0 kubenswrapper[27819]: I0319 09:50:58.661354 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9213d9a0-94b1-431b-8116-8fafc2a636cf-logs\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:58.754223 master-0 kubenswrapper[27819]: I0319 09:50:58.753609 27819 generic.go:334] "Generic (PLEG): container finished" podID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerID="3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3" exitCode=0
Mar 19 09:50:58.754223 master-0 kubenswrapper[27819]: I0319 09:50:58.753756 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" event={"ID":"492322b8-ccb0-4440-b0f4-8d43bbd889e0","Type":"ContainerDied","Data":"3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3"}
Mar 19 09:50:58.754223 master-0 kubenswrapper[27819]: I0319 09:50:58.753787 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" event={"ID":"492322b8-ccb0-4440-b0f4-8d43bbd889e0","Type":"ContainerStarted","Data":"f154f14599ccf059fc6c2f26f658bdda4a5402ad112193112207752f3721a85c"}
Mar 19 09:50:58.827171 master-0 kubenswrapper[27819]: I0319 09:50:58.827107 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6c02c97c-cf93-44c0-9399-43e7885bfd70","Type":"ContainerStarted","Data":"89b7f4424dd7ca42465ce76702b470998a66dda5bdd8a7a68f409dd40be74aa0"}
Mar 19 09:50:58.827171 master-0 kubenswrapper[27819]: I0319 09:50:58.827170 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6c02c97c-cf93-44c0-9399-43e7885bfd70","Type":"ContainerStarted","Data":"fa5b3d9a71cfe9f6e093e530240ca8dfea9bb7b61119bbc3c3249e0949f354f9"}
Mar 19 09:50:58.885858 master-0 kubenswrapper[27819]: I0319 09:50:58.885781 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"82c500ca-9291-43ed-9fa7-91debd8b6289","Type":"ContainerStarted","Data":"f5ea2afe4525b5de841ad763d155281e6a8ee02e7682b9f183162c9818394c49"}
Mar 19 09:50:58.960065 master-0 kubenswrapper[27819]: I0319 09:50:58.959990 27819 generic.go:334] "Generic (PLEG): container finished" podID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerID="475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de" exitCode=0
Mar 19 09:50:58.960065 master-0 kubenswrapper[27819]: I0319 09:50:58.960055 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f4b5cb8b6-kmwr8" event={"ID":"9213d9a0-94b1-431b-8116-8fafc2a636cf","Type":"ContainerDied","Data":"475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de"}
Mar 19 09:50:58.960317 master-0 kubenswrapper[27819]: I0319 09:50:58.960085 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5f4b5cb8b6-kmwr8" event={"ID":"9213d9a0-94b1-431b-8116-8fafc2a636cf","Type":"ContainerDied","Data":"5591e40d63764b02908dc07f6a0c881d65a4d3ed39ed32786684efc98a7d6884"}
Mar 19 09:50:58.960317 master-0 kubenswrapper[27819]: I0319 09:50:58.960102 27819 scope.go:117] "RemoveContainer" containerID="475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de"
Mar 19 09:50:58.960317 master-0 kubenswrapper[27819]: I0319 09:50:58.960274 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5f4b5cb8b6-kmwr8"
Mar 19 09:50:58.977585 master-0 kubenswrapper[27819]: I0319 09:50:58.974624 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-config-data" (OuterVolumeSpecName: "config-data") pod "9213d9a0-94b1-431b-8116-8fafc2a636cf" (UID: "9213d9a0-94b1-431b-8116-8fafc2a636cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:59.021603 master-0 kubenswrapper[27819]: I0319 09:50:59.014323 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9213d9a0-94b1-431b-8116-8fafc2a636cf" (UID: "9213d9a0-94b1-431b-8116-8fafc2a636cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:59.021603 master-0 kubenswrapper[27819]: I0319 09:50:59.016937 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:59.021603 master-0 kubenswrapper[27819]: I0319 09:50:59.016977 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:59.078807 master-0 kubenswrapper[27819]: I0319 09:50:59.078701 27819 scope.go:117] "RemoveContainer" containerID="ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b"
Mar 19 09:50:59.086109 master-0 kubenswrapper[27819]: I0319 09:50:59.086055 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9213d9a0-94b1-431b-8116-8fafc2a636cf" (UID: "9213d9a0-94b1-431b-8116-8fafc2a636cf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:59.121587 master-0 kubenswrapper[27819]: I0319 09:50:59.120715 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9213d9a0-94b1-431b-8116-8fafc2a636cf" (UID: "9213d9a0-94b1-431b-8116-8fafc2a636cf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 09:50:59.127737 master-0 kubenswrapper[27819]: I0319 09:50:59.125226 27819 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:59.127737 master-0 kubenswrapper[27819]: I0319 09:50:59.125264 27819 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9213d9a0-94b1-431b-8116-8fafc2a636cf-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 19 09:50:59.159980 master-0 kubenswrapper[27819]: I0319 09:50:59.159111 27819 scope.go:117] "RemoveContainer" containerID="475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de"
Mar 19 09:50:59.159980 master-0 kubenswrapper[27819]: E0319 09:50:59.159660 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de\": container with ID starting with 475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de not found: ID does not exist" containerID="475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de"
Mar 19 09:50:59.159980 master-0 kubenswrapper[27819]: I0319 09:50:59.159701 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de"} err="failed to get container status \"475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de\": rpc error: code = NotFound desc = could not find container \"475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de\": container with ID starting with 475937c87632c3d1761761d4bafe3f3eecb45868919548bbaf3a9a60dc4039de not found: ID does not exist"
Mar 19 09:50:59.159980 master-0 kubenswrapper[27819]: I0319 09:50:59.159731 27819 scope.go:117] "RemoveContainer" containerID="ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b"
Mar 19 09:50:59.160968 master-0 kubenswrapper[27819]: E0319 09:50:59.160234 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b\": container with ID starting with ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b not found: ID does not exist" containerID="ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b"
Mar 19 09:50:59.160968 master-0 kubenswrapper[27819]: I0319 09:50:59.160297 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b"} err="failed to get container status \"ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b\": rpc error: code = NotFound desc = could not find container \"ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b\": container with ID starting with ddc359f96b8c7bfa59474d498cea5cb23d31c34e494d9e1e213f68e5c612e36b not found: ID does not exist"
Mar 19 09:50:59.645986 master-0 kubenswrapper[27819]: I0319 09:50:59.645930 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5f4b5cb8b6-kmwr8"]
Mar 19 09:50:59.698764 master-0 kubenswrapper[27819]: I0319 09:50:59.698712 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5f4b5cb8b6-kmwr8"]
Mar 19 09:50:59.755896 master-0 kubenswrapper[27819]: I0319 09:50:59.753857 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ae80b-default-external-api-0"]
Mar 19 09:51:00.023325 master-0 kubenswrapper[27819]: I0319 09:51:00.023188 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" event={"ID":"492322b8-ccb0-4440-b0f4-8d43bbd889e0","Type":"ContainerStarted","Data":"986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4"}
Mar 19 09:51:00.025001 master-0 kubenswrapper[27819]: I0319 09:51:00.024896 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5"
Mar 19 09:51:00.038167 master-0 kubenswrapper[27819]: I0319 09:51:00.037393 27819 generic.go:334] "Generic (PLEG): container finished" podID="6c02c97c-cf93-44c0-9399-43e7885bfd70" containerID="89b7f4424dd7ca42465ce76702b470998a66dda5bdd8a7a68f409dd40be74aa0" exitCode=0
Mar 19 09:51:00.042761 master-0 kubenswrapper[27819]: I0319 09:51:00.038299 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6c02c97c-cf93-44c0-9399-43e7885bfd70","Type":"ContainerDied","Data":"89b7f4424dd7ca42465ce76702b470998a66dda5bdd8a7a68f409dd40be74aa0"}
Mar 19 09:51:00.062201 master-0 kubenswrapper[27819]: I0319 09:51:00.057379 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"82c500ca-9291-43ed-9fa7-91debd8b6289","Type":"ContainerStarted","Data":"9f1e4e5329d9aedd873c3b94a99810b92777eaa8c98185100f6c2b41fce13c54"}
Mar 19 09:51:00.062201 master-0 kubenswrapper[27819]: I0319 09:51:00.060836 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"cce27e6f-3c8a-4763-9862-87419f46e912","Type":"ContainerStarted","Data":"ae1f05ef22f6bef52dba710e0cc877b6745b5207c3fa6a74836f29bd5e732407"}
Mar 19 09:51:00.064215 master-0 kubenswrapper[27819]: I0319 09:51:00.064131 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" podStartSLOduration=4.064101138 podStartE2EDuration="4.064101138s" podCreationTimestamp="2026-03-19 09:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:00.058015121 +0000 UTC m=+1044.979592813" watchObservedRunningTime="2026-03-19 09:51:00.064101138 +0000 UTC m=+1044.985678830"
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: I0319 09:51:00.691859 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cflgz"]
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: E0319 09:51:00.694807 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-api"
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: I0319 09:51:00.694829 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-api"
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: E0319 09:51:00.694901 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-log"
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: I0319 09:51:00.694908 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-log"
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: I0319 09:51:00.695139 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-api"
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: I0319 09:51:00.695162 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" containerName="placement-log"
Mar 19 09:51:00.710275 master-0 kubenswrapper[27819]: I0319 09:51:00.696649 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.712448 master-0 kubenswrapper[27819]: I0319 09:51:00.712064 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 19 09:51:00.712448 master-0 kubenswrapper[27819]: I0319 09:51:00.712140 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Mar 19 09:51:00.720926 master-0 kubenswrapper[27819]: I0319 09:51:00.720832 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cflgz"]
Mar 19 09:51:00.825668 master-0 kubenswrapper[27819]: I0319 09:51:00.825544 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.825947 master-0 kubenswrapper[27819]: I0319 09:51:00.825929 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-scripts\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.826212 master-0 kubenswrapper[27819]: I0319 09:51:00.826196 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-config-data\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.826323 master-0 kubenswrapper[27819]: I0319 09:51:00.826309 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dqdz\" (UniqueName: \"kubernetes.io/projected/30d45107-4dfc-40e2-b122-216848830469-kube-api-access-8dqdz\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.929038 master-0 kubenswrapper[27819]: I0319 09:51:00.928965 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-config-data\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.932069 master-0 kubenswrapper[27819]: I0319 09:51:00.929294 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dqdz\" (UniqueName: \"kubernetes.io/projected/30d45107-4dfc-40e2-b122-216848830469-kube-api-access-8dqdz\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.932273 master-0 kubenswrapper[27819]: I0319 09:51:00.932253 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.932612 master-0 kubenswrapper[27819]: I0319 09:51:00.932544 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-scripts\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.938378 master-0 kubenswrapper[27819]: I0319 09:51:00.936304 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-scripts\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.942487 master-0 kubenswrapper[27819]: I0319 09:51:00.942420 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-config-data\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.946475 master-0 kubenswrapper[27819]: I0319 09:51:00.946211 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:00.954301 master-0 kubenswrapper[27819]: I0319 09:51:00.953994 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dqdz\" (UniqueName: \"kubernetes.io/projected/30d45107-4dfc-40e2-b122-216848830469-kube-api-access-8dqdz\") pod \"nova-cell0-conductor-db-sync-cflgz\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:01.084357 master-0 kubenswrapper[27819]: I0319 09:51:01.084215 27819 generic.go:334] "Generic (PLEG): container finished" podID="ca78928f-b0d4-4090-acba-66e98b7d312d" containerID="0ef54928f4871f2bcc441b2849549251c59405ee7f5d818158cfcd082f338837" exitCode=0
Mar 19 09:51:01.084357 master-0 kubenswrapper[27819]: I0319 09:51:01.084325 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerDied","Data":"0ef54928f4871f2bcc441b2849549251c59405ee7f5d818158cfcd082f338837"}
Mar 19 09:51:01.091312 master-0 kubenswrapper[27819]: I0319 09:51:01.091261 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cflgz"
Mar 19 09:51:01.096732 master-0 kubenswrapper[27819]: I0319 09:51:01.096006 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-internal-api-0" event={"ID":"82c500ca-9291-43ed-9fa7-91debd8b6289","Type":"ContainerStarted","Data":"ddb2ce0ebf0b53244de3306df012340bc762440c1f35f6903aa9377eddbe375c"}
Mar 19 09:51:01.105936 master-0 kubenswrapper[27819]: I0319 09:51:01.105810 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"cce27e6f-3c8a-4763-9862-87419f46e912","Type":"ContainerStarted","Data":"f2220283510bf3425991370b2e7f30e31416f1b4e7602113ef2c64eacfa5e067"}
Mar 19 09:51:01.204707 master-0 kubenswrapper[27819]: E0319 09:51:01.204324 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca78928f_b0d4_4090_acba_66e98b7d312d.slice/crio-conmon-0ef54928f4871f2bcc441b2849549251c59405ee7f5d818158cfcd082f338837.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca78928f_b0d4_4090_acba_66e98b7d312d.slice/crio-0ef54928f4871f2bcc441b2849549251c59405ee7f5d818158cfcd082f338837.scope\": RecentStats: unable to find data in memory cache]"
Mar 19 09:51:01.225372 master-0 kubenswrapper[27819]: I0319 09:51:01.225285 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack/glance-ae80b-default-internal-api-0" podStartSLOduration=6.225259757 podStartE2EDuration="6.225259757s" podCreationTimestamp="2026-03-19 09:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:01.186572805 +0000 UTC m=+1046.108150507" watchObservedRunningTime="2026-03-19 09:51:01.225259757 +0000 UTC m=+1046.146837459" Mar 19 09:51:01.309476 master-0 kubenswrapper[27819]: I0319 09:51:01.308086 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9213d9a0-94b1-431b-8116-8fafc2a636cf" path="/var/lib/kubelet/pods/9213d9a0-94b1-431b-8116-8fafc2a636cf/volumes" Mar 19 09:51:01.426392 master-0 kubenswrapper[27819]: I0319 09:51:01.424645 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:51:01.769426 master-0 kubenswrapper[27819]: I0319 09:51:01.769289 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cflgz"] Mar 19 09:51:02.119931 master-0 kubenswrapper[27819]: I0319 09:51:02.119024 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cflgz" event={"ID":"30d45107-4dfc-40e2-b122-216848830469","Type":"ContainerStarted","Data":"3fa1e4dd9dfb23570526d5831f48694f60b92d3e7bb18f0a7826f06c92d8bfe0"} Mar 19 09:51:02.122727 master-0 kubenswrapper[27819]: I0319 09:51:02.122654 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ae80b-default-external-api-0" event={"ID":"cce27e6f-3c8a-4763-9862-87419f46e912","Type":"ContainerStarted","Data":"32553987f1b2da7b238c0623bb266280abf810fdc94477e8961e978155d064c5"} Mar 19 09:51:02.163796 master-0 kubenswrapper[27819]: I0319 09:51:02.163657 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ae80b-default-external-api-0" podStartSLOduration=6.163631945 podStartE2EDuration="6.163631945s" 
podCreationTimestamp="2026-03-19 09:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:02.148316738 +0000 UTC m=+1047.069894440" watchObservedRunningTime="2026-03-19 09:51:02.163631945 +0000 UTC m=+1047.085209637" Mar 19 09:51:05.379363 master-0 kubenswrapper[27819]: I0319 09:51:05.379300 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j4z46" podUID="8d960bf6-0166-4e00-8ba8-ccc61b696d46" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:51:07.008218 master-0 kubenswrapper[27819]: I0319 09:51:07.008094 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:51:07.169865 master-0 kubenswrapper[27819]: I0319 09:51:07.169729 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756466c69-85qsr"] Mar 19 09:51:07.170758 master-0 kubenswrapper[27819]: I0319 09:51:07.170719 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-756466c69-85qsr" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="dnsmasq-dns" containerID="cri-o://7c29502f529c87d3109a93e4a17aad9d4b4c04492048661f8bf362c26499978a" gracePeriod=10 Mar 19 09:51:07.254817 master-0 kubenswrapper[27819]: I0319 09:51:07.254770 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:07.254817 master-0 kubenswrapper[27819]: I0319 09:51:07.254825 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:07.301773 master-0 kubenswrapper[27819]: I0319 09:51:07.301730 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:07.307509 master-0 kubenswrapper[27819]: I0319 09:51:07.307276 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:07.457444 master-0 kubenswrapper[27819]: I0319 09:51:07.457168 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerStarted","Data":"1bc58f48ed200e2419186d0d5777961a6dcc4366af614fc577fad090c15e016f"} Mar 19 09:51:07.461482 master-0 kubenswrapper[27819]: I0319 09:51:07.461442 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6c02c97c-cf93-44c0-9399-43e7885bfd70","Type":"ContainerStarted","Data":"e5681afa2c31ec72cc83963d866ebc4f6c2a48a69c660a7be62626137a89d944"} Mar 19 09:51:07.462002 master-0 kubenswrapper[27819]: I0319 09:51:07.461982 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:07.462559 master-0 kubenswrapper[27819]: I0319 09:51:07.462522 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:07.462559 master-0 kubenswrapper[27819]: I0319 09:51:07.462518 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="6c02c97c-cf93-44c0-9399-43e7885bfd70" containerName="inspector-pxe-init" containerID="cri-o://e5681afa2c31ec72cc83963d866ebc4f6c2a48a69c660a7be62626137a89d944" gracePeriod=60 Mar 19 09:51:08.395135 master-0 kubenswrapper[27819]: I0319 09:51:08.395063 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-756466c69-85qsr" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.227:5353: connect: 
connection refused" Mar 19 09:51:08.514225 master-0 kubenswrapper[27819]: I0319 09:51:08.514150 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:08.514225 master-0 kubenswrapper[27819]: I0319 09:51:08.514233 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:08.572802 master-0 kubenswrapper[27819]: I0319 09:51:08.571041 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:08.577378 master-0 kubenswrapper[27819]: I0319 09:51:08.576747 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:09.491638 master-0 kubenswrapper[27819]: I0319 09:51:09.491580 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:09.492505 master-0 kubenswrapper[27819]: I0319 09:51:09.492203 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:13.398264 master-0 kubenswrapper[27819]: I0319 09:51:13.396743 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-756466c69-85qsr" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.227:5353: connect: connection refused" Mar 19 09:51:13.564211 master-0 kubenswrapper[27819]: I0319 09:51:13.563958 27819 generic.go:334] "Generic (PLEG): container finished" podID="7faee39a-b070-48c9-afed-6de955551889" containerID="7c29502f529c87d3109a93e4a17aad9d4b4c04492048661f8bf362c26499978a" exitCode=0 Mar 19 09:51:13.564211 master-0 kubenswrapper[27819]: I0319 09:51:13.564038 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-756466c69-85qsr" event={"ID":"7faee39a-b070-48c9-afed-6de955551889","Type":"ContainerDied","Data":"7c29502f529c87d3109a93e4a17aad9d4b4c04492048661f8bf362c26499978a"} Mar 19 09:51:13.569371 master-0 kubenswrapper[27819]: I0319 09:51:13.569136 27819 generic.go:334] "Generic (PLEG): container finished" podID="6c02c97c-cf93-44c0-9399-43e7885bfd70" containerID="e5681afa2c31ec72cc83963d866ebc4f6c2a48a69c660a7be62626137a89d944" exitCode=0 Mar 19 09:51:13.569371 master-0 kubenswrapper[27819]: I0319 09:51:13.569322 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6c02c97c-cf93-44c0-9399-43e7885bfd70","Type":"ContainerDied","Data":"e5681afa2c31ec72cc83963d866ebc4f6c2a48a69c660a7be62626137a89d944"} Mar 19 09:51:14.218402 master-0 kubenswrapper[27819]: I0319 09:51:14.218341 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:51:14.322033 master-0 kubenswrapper[27819]: I0319 09:51:14.321970 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-nb\") pod \"7faee39a-b070-48c9-afed-6de955551889\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " Mar 19 09:51:14.322197 master-0 kubenswrapper[27819]: I0319 09:51:14.322043 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-svc\") pod \"7faee39a-b070-48c9-afed-6de955551889\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " Mar 19 09:51:14.322197 master-0 kubenswrapper[27819]: I0319 09:51:14.322125 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-config\") pod 
\"7faee39a-b070-48c9-afed-6de955551889\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " Mar 19 09:51:14.322272 master-0 kubenswrapper[27819]: I0319 09:51:14.322252 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-sb\") pod \"7faee39a-b070-48c9-afed-6de955551889\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " Mar 19 09:51:14.322333 master-0 kubenswrapper[27819]: I0319 09:51:14.322300 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-swift-storage-0\") pod \"7faee39a-b070-48c9-afed-6de955551889\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " Mar 19 09:51:14.322389 master-0 kubenswrapper[27819]: I0319 09:51:14.322371 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r2sq\" (UniqueName: \"kubernetes.io/projected/7faee39a-b070-48c9-afed-6de955551889-kube-api-access-4r2sq\") pod \"7faee39a-b070-48c9-afed-6de955551889\" (UID: \"7faee39a-b070-48c9-afed-6de955551889\") " Mar 19 09:51:14.327885 master-0 kubenswrapper[27819]: I0319 09:51:14.327581 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7faee39a-b070-48c9-afed-6de955551889-kube-api-access-4r2sq" (OuterVolumeSpecName: "kube-api-access-4r2sq") pod "7faee39a-b070-48c9-afed-6de955551889" (UID: "7faee39a-b070-48c9-afed-6de955551889"). InnerVolumeSpecName "kube-api-access-4r2sq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:51:14.423061 master-0 kubenswrapper[27819]: I0319 09:51:14.423004 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-config" (OuterVolumeSpecName: "config") pod "7faee39a-b070-48c9-afed-6de955551889" (UID: "7faee39a-b070-48c9-afed-6de955551889"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:51:14.425661 master-0 kubenswrapper[27819]: I0319 09:51:14.425523 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r2sq\" (UniqueName: \"kubernetes.io/projected/7faee39a-b070-48c9-afed-6de955551889-kube-api-access-4r2sq\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.425661 master-0 kubenswrapper[27819]: I0319 09:51:14.425602 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.437059 master-0 kubenswrapper[27819]: I0319 09:51:14.436430 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7faee39a-b070-48c9-afed-6de955551889" (UID: "7faee39a-b070-48c9-afed-6de955551889"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:51:14.437059 master-0 kubenswrapper[27819]: I0319 09:51:14.436498 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7faee39a-b070-48c9-afed-6de955551889" (UID: "7faee39a-b070-48c9-afed-6de955551889"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:51:14.437059 master-0 kubenswrapper[27819]: I0319 09:51:14.436566 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7faee39a-b070-48c9-afed-6de955551889" (UID: "7faee39a-b070-48c9-afed-6de955551889"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:51:14.439656 master-0 kubenswrapper[27819]: I0319 09:51:14.438241 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7faee39a-b070-48c9-afed-6de955551889" (UID: "7faee39a-b070-48c9-afed-6de955551889"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:51:14.504478 master-0 kubenswrapper[27819]: I0319 09:51:14.504425 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 19 09:51:14.527455 master-0 kubenswrapper[27819]: I0319 09:51:14.527388 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.527455 master-0 kubenswrapper[27819]: I0319 09:51:14.527443 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.527455 master-0 kubenswrapper[27819]: I0319 09:51:14.527462 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.527728 master-0 kubenswrapper[27819]: I0319 09:51:14.527476 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7faee39a-b070-48c9-afed-6de955551889-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.593888 master-0 kubenswrapper[27819]: I0319 09:51:14.593844 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756466c69-85qsr" Mar 19 09:51:14.594121 master-0 kubenswrapper[27819]: I0319 09:51:14.593843 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756466c69-85qsr" event={"ID":"7faee39a-b070-48c9-afed-6de955551889","Type":"ContainerDied","Data":"4f82cbc7a006a5d4a25b5b2f831e6da45f4356f70821a501b44232ff62d25231"} Mar 19 09:51:14.594176 master-0 kubenswrapper[27819]: I0319 09:51:14.594160 27819 scope.go:117] "RemoveContainer" containerID="7c29502f529c87d3109a93e4a17aad9d4b4c04492048661f8bf362c26499978a" Mar 19 09:51:14.608051 master-0 kubenswrapper[27819]: I0319 09:51:14.607990 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"6c02c97c-cf93-44c0-9399-43e7885bfd70","Type":"ContainerDied","Data":"fa5b3d9a71cfe9f6e093e530240ca8dfea9bb7b61119bbc3c3249e0949f354f9"} Mar 19 09:51:14.608239 master-0 kubenswrapper[27819]: I0319 09:51:14.608119 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 19 09:51:14.614232 master-0 kubenswrapper[27819]: I0319 09:51:14.614188 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:14.614555 master-0 kubenswrapper[27819]: I0319 09:51:14.614508 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-internal-api-0" Mar 19 09:51:14.615055 master-0 kubenswrapper[27819]: I0319 09:51:14.614997 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cflgz" event={"ID":"30d45107-4dfc-40e2-b122-216848830469","Type":"ContainerStarted","Data":"76dc7f6b49e73e06dd7fbbd347b395f75e95df5f0d47ffe39d7fda64ae9d7e23"} Mar 19 09:51:14.625130 master-0 kubenswrapper[27819]: I0319 09:51:14.625077 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:14.625384 master-0 kubenswrapper[27819]: I0319 09:51:14.625245 27819 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 19 09:51:14.629421 master-0 kubenswrapper[27819]: I0319 09:51:14.629298 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-combined-ca-bundle\") pod \"6c02c97c-cf93-44c0-9399-43e7885bfd70\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " Mar 19 09:51:14.629421 master-0 kubenswrapper[27819]: I0319 09:51:14.629361 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"6c02c97c-cf93-44c0-9399-43e7885bfd70\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " Mar 19 09:51:14.630036 master-0 
kubenswrapper[27819]: I0319 09:51:14.629556 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-scripts\") pod \"6c02c97c-cf93-44c0-9399-43e7885bfd70\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " Mar 19 09:51:14.630036 master-0 kubenswrapper[27819]: I0319 09:51:14.629641 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-config\") pod \"6c02c97c-cf93-44c0-9399-43e7885bfd70\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " Mar 19 09:51:14.630036 master-0 kubenswrapper[27819]: I0319 09:51:14.629680 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkpb5\" (UniqueName: \"kubernetes.io/projected/6c02c97c-cf93-44c0-9399-43e7885bfd70-kube-api-access-vkpb5\") pod \"6c02c97c-cf93-44c0-9399-43e7885bfd70\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " Mar 19 09:51:14.630036 master-0 kubenswrapper[27819]: I0319 09:51:14.629738 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic\") pod \"6c02c97c-cf93-44c0-9399-43e7885bfd70\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " Mar 19 09:51:14.630036 master-0 kubenswrapper[27819]: I0319 09:51:14.629765 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6c02c97c-cf93-44c0-9399-43e7885bfd70-etc-podinfo\") pod \"6c02c97c-cf93-44c0-9399-43e7885bfd70\" (UID: \"6c02c97c-cf93-44c0-9399-43e7885bfd70\") " Mar 19 09:51:14.631856 master-0 kubenswrapper[27819]: I0319 09:51:14.631742 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "6c02c97c-cf93-44c0-9399-43e7885bfd70" (UID: "6c02c97c-cf93-44c0-9399-43e7885bfd70"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:51:14.631944 master-0 kubenswrapper[27819]: I0319 09:51:14.631925 27819 scope.go:117] "RemoveContainer" containerID="71e547e34a16b8f7749b8445f6383128b46112b8d66ab6b461f4b68faaa61811" Mar 19 09:51:14.632622 master-0 kubenswrapper[27819]: I0319 09:51:14.632314 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "6c02c97c-cf93-44c0-9399-43e7885bfd70" (UID: "6c02c97c-cf93-44c0-9399-43e7885bfd70"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:51:14.634738 master-0 kubenswrapper[27819]: I0319 09:51:14.634690 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-config" (OuterVolumeSpecName: "config") pod "6c02c97c-cf93-44c0-9399-43e7885bfd70" (UID: "6c02c97c-cf93-44c0-9399-43e7885bfd70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:14.635017 master-0 kubenswrapper[27819]: I0319 09:51:14.634959 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-scripts" (OuterVolumeSpecName: "scripts") pod "6c02c97c-cf93-44c0-9399-43e7885bfd70" (UID: "6c02c97c-cf93-44c0-9399-43e7885bfd70"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:14.635859 master-0 kubenswrapper[27819]: I0319 09:51:14.635818 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6c02c97c-cf93-44c0-9399-43e7885bfd70-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "6c02c97c-cf93-44c0-9399-43e7885bfd70" (UID: "6c02c97c-cf93-44c0-9399-43e7885bfd70"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 19 09:51:14.637481 master-0 kubenswrapper[27819]: I0319 09:51:14.637421 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c02c97c-cf93-44c0-9399-43e7885bfd70-kube-api-access-vkpb5" (OuterVolumeSpecName: "kube-api-access-vkpb5") pod "6c02c97c-cf93-44c0-9399-43e7885bfd70" (UID: "6c02c97c-cf93-44c0-9399-43e7885bfd70"). InnerVolumeSpecName "kube-api-access-vkpb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:51:14.637589 master-0 kubenswrapper[27819]: I0319 09:51:14.637532 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-ae80b-default-external-api-0" Mar 19 09:51:14.675711 master-0 kubenswrapper[27819]: I0319 09:51:14.674928 27819 scope.go:117] "RemoveContainer" containerID="e5681afa2c31ec72cc83963d866ebc4f6c2a48a69c660a7be62626137a89d944" Mar 19 09:51:14.730032 master-0 kubenswrapper[27819]: I0319 09:51:14.729978 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c02c97c-cf93-44c0-9399-43e7885bfd70" (UID: "6c02c97c-cf93-44c0-9399-43e7885bfd70"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:14.734735 master-0 kubenswrapper[27819]: I0319 09:51:14.734624 27819 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.734735 master-0 kubenswrapper[27819]: I0319 09:51:14.734677 27819 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/6c02c97c-cf93-44c0-9399-43e7885bfd70-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.734735 master-0 kubenswrapper[27819]: I0319 09:51:14.734691 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.734735 master-0 kubenswrapper[27819]: I0319 09:51:14.734704 27819 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/6c02c97c-cf93-44c0-9399-43e7885bfd70-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.734735 master-0 kubenswrapper[27819]: I0319 09:51:14.734721 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.734735 master-0 kubenswrapper[27819]: I0319 09:51:14.734732 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c02c97c-cf93-44c0-9399-43e7885bfd70-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.735026 master-0 kubenswrapper[27819]: I0319 09:51:14.734744 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkpb5\" (UniqueName: 
\"kubernetes.io/projected/6c02c97c-cf93-44c0-9399-43e7885bfd70-kube-api-access-vkpb5\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:14.748194 master-0 kubenswrapper[27819]: I0319 09:51:14.744516 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-cflgz" podStartSLOduration=2.577333712 podStartE2EDuration="14.744494422s" podCreationTimestamp="2026-03-19 09:51:00 +0000 UTC" firstStartedPulling="2026-03-19 09:51:01.791461684 +0000 UTC m=+1046.713039376" lastFinishedPulling="2026-03-19 09:51:13.958622404 +0000 UTC m=+1058.880200086" observedRunningTime="2026-03-19 09:51:14.675518805 +0000 UTC m=+1059.597096507" watchObservedRunningTime="2026-03-19 09:51:14.744494422 +0000 UTC m=+1059.666072114" Mar 19 09:51:14.757081 master-0 kubenswrapper[27819]: I0319 09:51:14.753231 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756466c69-85qsr"] Mar 19 09:51:14.808706 master-0 kubenswrapper[27819]: I0319 09:51:14.796383 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-756466c69-85qsr"] Mar 19 09:51:14.820501 master-0 kubenswrapper[27819]: I0319 09:51:14.820361 27819 scope.go:117] "RemoveContainer" containerID="89b7f4424dd7ca42465ce76702b470998a66dda5bdd8a7a68f409dd40be74aa0" Mar 19 09:51:15.276581 master-0 kubenswrapper[27819]: I0319 09:51:15.276515 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:51:15.310288 master-0 kubenswrapper[27819]: I0319 09:51:15.310232 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7faee39a-b070-48c9-afed-6de955551889" path="/var/lib/kubelet/pods/7faee39a-b070-48c9-afed-6de955551889/volumes" Mar 19 09:51:15.580569 master-0 kubenswrapper[27819]: I0319 09:51:15.579709 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:51:15.756296 master-0 kubenswrapper[27819]: I0319 09:51:15.756242 
27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:51:15.757355 master-0 kubenswrapper[27819]: E0319 09:51:15.757303 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="init" Mar 19 09:51:15.757421 master-0 kubenswrapper[27819]: I0319 09:51:15.757356 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="init" Mar 19 09:51:15.757421 master-0 kubenswrapper[27819]: E0319 09:51:15.757381 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c02c97c-cf93-44c0-9399-43e7885bfd70" containerName="ironic-python-agent-init" Mar 19 09:51:15.757421 master-0 kubenswrapper[27819]: I0319 09:51:15.757390 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c02c97c-cf93-44c0-9399-43e7885bfd70" containerName="ironic-python-agent-init" Mar 19 09:51:15.757560 master-0 kubenswrapper[27819]: E0319 09:51:15.757462 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="dnsmasq-dns" Mar 19 09:51:15.757560 master-0 kubenswrapper[27819]: I0319 09:51:15.757475 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="dnsmasq-dns" Mar 19 09:51:15.757560 master-0 kubenswrapper[27819]: E0319 09:51:15.757521 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c02c97c-cf93-44c0-9399-43e7885bfd70" containerName="inspector-pxe-init" Mar 19 09:51:15.757560 master-0 kubenswrapper[27819]: I0319 09:51:15.757532 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c02c97c-cf93-44c0-9399-43e7885bfd70" containerName="inspector-pxe-init" Mar 19 09:51:15.757905 master-0 kubenswrapper[27819]: I0319 09:51:15.757880 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c02c97c-cf93-44c0-9399-43e7885bfd70" 
containerName="inspector-pxe-init" Mar 19 09:51:15.757949 master-0 kubenswrapper[27819]: I0319 09:51:15.757933 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="7faee39a-b070-48c9-afed-6de955551889" containerName="dnsmasq-dns" Mar 19 09:51:15.762126 master-0 kubenswrapper[27819]: I0319 09:51:15.761920 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 19 09:51:15.765703 master-0 kubenswrapper[27819]: I0319 09:51:15.765669 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 19 09:51:15.766241 master-0 kubenswrapper[27819]: I0319 09:51:15.765934 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Mar 19 09:51:15.766241 master-0 kubenswrapper[27819]: I0319 09:51:15.766038 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 19 09:51:15.766241 master-0 kubenswrapper[27819]: I0319 09:51:15.766086 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Mar 19 09:51:15.766353 master-0 kubenswrapper[27819]: I0319 09:51:15.766326 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 19 09:51:15.871079 master-0 kubenswrapper[27819]: I0319 09:51:15.870894 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:51:16.031317 master-0 kubenswrapper[27819]: I0319 09:51:16.031264 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.031529 
master-0 kubenswrapper[27819]: I0319 09:51:16.031391 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.032831 master-0 kubenswrapper[27819]: I0319 09:51:16.031622 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz59\" (UniqueName: \"kubernetes.io/projected/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-kube-api-access-pvz59\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.032831 master-0 kubenswrapper[27819]: I0319 09:51:16.031827 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.032831 master-0 kubenswrapper[27819]: I0319 09:51:16.031859 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-config\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.032831 master-0 kubenswrapper[27819]: I0319 09:51:16.031923 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " 
pod="openstack/ironic-inspector-0" Mar 19 09:51:16.032831 master-0 kubenswrapper[27819]: I0319 09:51:16.032120 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.032831 master-0 kubenswrapper[27819]: I0319 09:51:16.032449 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.032831 master-0 kubenswrapper[27819]: I0319 09:51:16.032530 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-scripts\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134445 master-0 kubenswrapper[27819]: I0319 09:51:16.134281 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvz59\" (UniqueName: \"kubernetes.io/projected/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-kube-api-access-pvz59\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134445 master-0 kubenswrapper[27819]: I0319 09:51:16.134395 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " 
pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134445 master-0 kubenswrapper[27819]: I0319 09:51:16.134417 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-config\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134445 master-0 kubenswrapper[27819]: I0319 09:51:16.134441 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134894 master-0 kubenswrapper[27819]: I0319 09:51:16.134478 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134894 master-0 kubenswrapper[27819]: I0319 09:51:16.134579 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134894 master-0 kubenswrapper[27819]: I0319 09:51:16.134612 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-scripts\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134894 
master-0 kubenswrapper[27819]: I0319 09:51:16.134674 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.134894 master-0 kubenswrapper[27819]: I0319 09:51:16.134689 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.135581 master-0 kubenswrapper[27819]: I0319 09:51:16.135521 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.135721 master-0 kubenswrapper[27819]: I0319 09:51:16.135680 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.138114 master-0 kubenswrapper[27819]: I0319 09:51:16.138069 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.139129 master-0 kubenswrapper[27819]: I0319 09:51:16.139091 
27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-scripts\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.150215 master-0 kubenswrapper[27819]: I0319 09:51:16.150150 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.151050 master-0 kubenswrapper[27819]: I0319 09:51:16.150956 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.151127 master-0 kubenswrapper[27819]: I0319 09:51:16.151017 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.151226 master-0 kubenswrapper[27819]: I0319 09:51:16.151185 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-config\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.195630 master-0 kubenswrapper[27819]: I0319 09:51:16.195572 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvz59\" (UniqueName: 
\"kubernetes.io/projected/d554c9a9-fcc8-489a-80b4-3d4fe58da11e-kube-api-access-pvz59\") pod \"ironic-inspector-0\" (UID: \"d554c9a9-fcc8-489a-80b4-3d4fe58da11e\") " pod="openstack/ironic-inspector-0" Mar 19 09:51:16.383239 master-0 kubenswrapper[27819]: I0319 09:51:16.382702 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 19 09:51:17.293148 master-0 kubenswrapper[27819]: I0319 09:51:17.293001 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c02c97c-cf93-44c0-9399-43e7885bfd70" path="/var/lib/kubelet/pods/6c02c97c-cf93-44c0-9399-43e7885bfd70/volumes" Mar 19 09:51:17.759065 master-0 kubenswrapper[27819]: W0319 09:51:17.759002 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd554c9a9_fcc8_489a_80b4_3d4fe58da11e.slice/crio-8634928cb0002bc6044f2674d4f6502080846b1c9872b5f2a26332beb613ba4a WatchSource:0}: Error finding container 8634928cb0002bc6044f2674d4f6502080846b1c9872b5f2a26332beb613ba4a: Status 404 returned error can't find the container with id 8634928cb0002bc6044f2674d4f6502080846b1c9872b5f2a26332beb613ba4a Mar 19 09:51:17.763685 master-0 kubenswrapper[27819]: I0319 09:51:17.763628 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 19 09:51:18.703577 master-0 kubenswrapper[27819]: I0319 09:51:18.702046 27819 generic.go:334] "Generic (PLEG): container finished" podID="d554c9a9-fcc8-489a-80b4-3d4fe58da11e" containerID="8b6ec4ebc1b7a95c648671ec0a454830834685d6022b45872d0cc2cda479edfe" exitCode=0 Mar 19 09:51:18.703577 master-0 kubenswrapper[27819]: I0319 09:51:18.702112 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerDied","Data":"8b6ec4ebc1b7a95c648671ec0a454830834685d6022b45872d0cc2cda479edfe"} Mar 19 09:51:18.703577 master-0 
kubenswrapper[27819]: I0319 09:51:18.702144 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerStarted","Data":"8634928cb0002bc6044f2674d4f6502080846b1c9872b5f2a26332beb613ba4a"} Mar 19 09:51:19.719296 master-0 kubenswrapper[27819]: I0319 09:51:19.719232 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerStarted","Data":"2ab1d86f59b5d73727684e9fcb109282af3319ee0ea906ee161bd760ca13dd44"} Mar 19 09:51:20.730270 master-0 kubenswrapper[27819]: I0319 09:51:20.730219 27819 generic.go:334] "Generic (PLEG): container finished" podID="d554c9a9-fcc8-489a-80b4-3d4fe58da11e" containerID="2ab1d86f59b5d73727684e9fcb109282af3319ee0ea906ee161bd760ca13dd44" exitCode=0 Mar 19 09:51:20.730270 master-0 kubenswrapper[27819]: I0319 09:51:20.730273 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerDied","Data":"2ab1d86f59b5d73727684e9fcb109282af3319ee0ea906ee161bd760ca13dd44"} Mar 19 09:51:21.743720 master-0 kubenswrapper[27819]: I0319 09:51:21.743677 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerStarted","Data":"431b912dacd09673eb117f5ed0a6f61c6998bc5ddbddf058072f07761abe95b4"} Mar 19 09:51:22.760659 master-0 kubenswrapper[27819]: I0319 09:51:22.759605 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerStarted","Data":"b5c5eea89a64c7abf728d60625ffba705e415ee14fcf182dae5570fea0fcca64"} Mar 19 09:51:22.760659 master-0 kubenswrapper[27819]: I0319 09:51:22.759656 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerStarted","Data":"fc2273949f3bd627d3e1723668fe91baa77bafef2aa8b8bde55aa3370d03a55b"} Mar 19 09:51:23.774950 master-0 kubenswrapper[27819]: I0319 09:51:23.774886 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerStarted","Data":"20a8cc1e6cd0c4dd749176fb2e88539c8ee76c1bf146dd005b32ba77787e0545"} Mar 19 09:51:23.774950 master-0 kubenswrapper[27819]: I0319 09:51:23.774942 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"d554c9a9-fcc8-489a-80b4-3d4fe58da11e","Type":"ContainerStarted","Data":"e31ef748707b809c496e8d8cd768cffdd27980d58d5cc57fc64172a0d1f0e60c"} Mar 19 09:51:23.775482 master-0 kubenswrapper[27819]: I0319 09:51:23.775174 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 19 09:51:23.850036 master-0 kubenswrapper[27819]: I0319 09:51:23.849964 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-795d6cd54b-mpqdp" podUID="1e98d27a-2c1e-43cf-b1dd-5c2f9beaebf0" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.128.0.217:9696/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:51:23.992719 master-0 kubenswrapper[27819]: I0319 09:51:23.990487 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=8.99045174 podStartE2EDuration="8.99045174s" podCreationTimestamp="2026-03-19 09:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:23.967850035 +0000 UTC m=+1068.889427737" watchObservedRunningTime="2026-03-19 09:51:23.99045174 +0000 UTC m=+1068.912029472" Mar 19 09:51:24.783857 master-0 kubenswrapper[27819]: I0319 
09:51:24.783779 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.383891 master-0 kubenswrapper[27819]: I0319 09:51:26.383806 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.383891 master-0 kubenswrapper[27819]: I0319 09:51:26.383873 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.384593 master-0 kubenswrapper[27819]: I0319 09:51:26.383924 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.384593 master-0 kubenswrapper[27819]: I0319 09:51:26.383981 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.432024 master-0 kubenswrapper[27819]: I0319 09:51:26.431954 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.436858 master-0 kubenswrapper[27819]: I0319 09:51:26.436804 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.811108 master-0 kubenswrapper[27819]: I0319 09:51:26.810990 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.813032 master-0 kubenswrapper[27819]: I0319 09:51:26.812995 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 19 09:51:26.858136 master-0 kubenswrapper[27819]: I0319 09:51:26.858088 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 19 09:51:27.815972 master-0 kubenswrapper[27819]: I0319 09:51:27.815760 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 19 09:51:30.846662 
master-0 kubenswrapper[27819]: I0319 09:51:30.846090 27819 generic.go:334] "Generic (PLEG): container finished" podID="30d45107-4dfc-40e2-b122-216848830469" containerID="76dc7f6b49e73e06dd7fbbd347b395f75e95df5f0d47ffe39d7fda64ae9d7e23" exitCode=0 Mar 19 09:51:30.846662 master-0 kubenswrapper[27819]: I0319 09:51:30.846141 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cflgz" event={"ID":"30d45107-4dfc-40e2-b122-216848830469","Type":"ContainerDied","Data":"76dc7f6b49e73e06dd7fbbd347b395f75e95df5f0d47ffe39d7fda64ae9d7e23"} Mar 19 09:51:32.288693 master-0 kubenswrapper[27819]: I0319 09:51:32.288648 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cflgz" Mar 19 09:51:32.487106 master-0 kubenswrapper[27819]: I0319 09:51:32.486579 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dqdz\" (UniqueName: \"kubernetes.io/projected/30d45107-4dfc-40e2-b122-216848830469-kube-api-access-8dqdz\") pod \"30d45107-4dfc-40e2-b122-216848830469\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " Mar 19 09:51:32.487106 master-0 kubenswrapper[27819]: I0319 09:51:32.486661 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-config-data\") pod \"30d45107-4dfc-40e2-b122-216848830469\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " Mar 19 09:51:32.487106 master-0 kubenswrapper[27819]: I0319 09:51:32.486727 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-combined-ca-bundle\") pod \"30d45107-4dfc-40e2-b122-216848830469\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " Mar 19 09:51:32.487106 master-0 kubenswrapper[27819]: I0319 09:51:32.486759 27819 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-scripts\") pod \"30d45107-4dfc-40e2-b122-216848830469\" (UID: \"30d45107-4dfc-40e2-b122-216848830469\") " Mar 19 09:51:32.490879 master-0 kubenswrapper[27819]: I0319 09:51:32.490571 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d45107-4dfc-40e2-b122-216848830469-kube-api-access-8dqdz" (OuterVolumeSpecName: "kube-api-access-8dqdz") pod "30d45107-4dfc-40e2-b122-216848830469" (UID: "30d45107-4dfc-40e2-b122-216848830469"). InnerVolumeSpecName "kube-api-access-8dqdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:51:32.505378 master-0 kubenswrapper[27819]: I0319 09:51:32.505302 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-scripts" (OuterVolumeSpecName: "scripts") pod "30d45107-4dfc-40e2-b122-216848830469" (UID: "30d45107-4dfc-40e2-b122-216848830469"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:32.511884 master-0 kubenswrapper[27819]: I0319 09:51:32.511622 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-config-data" (OuterVolumeSpecName: "config-data") pod "30d45107-4dfc-40e2-b122-216848830469" (UID: "30d45107-4dfc-40e2-b122-216848830469"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:32.516424 master-0 kubenswrapper[27819]: I0319 09:51:32.516359 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30d45107-4dfc-40e2-b122-216848830469" (UID: "30d45107-4dfc-40e2-b122-216848830469"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:32.589981 master-0 kubenswrapper[27819]: I0319 09:51:32.589918 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:32.589981 master-0 kubenswrapper[27819]: I0319 09:51:32.589969 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:32.589981 master-0 kubenswrapper[27819]: I0319 09:51:32.589984 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dqdz\" (UniqueName: \"kubernetes.io/projected/30d45107-4dfc-40e2-b122-216848830469-kube-api-access-8dqdz\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:32.590204 master-0 kubenswrapper[27819]: I0319 09:51:32.589995 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30d45107-4dfc-40e2-b122-216848830469-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:32.877155 master-0 kubenswrapper[27819]: I0319 09:51:32.877058 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-cflgz" event={"ID":"30d45107-4dfc-40e2-b122-216848830469","Type":"ContainerDied","Data":"3fa1e4dd9dfb23570526d5831f48694f60b92d3e7bb18f0a7826f06c92d8bfe0"} Mar 19 09:51:32.877155 master-0 kubenswrapper[27819]: I0319 09:51:32.877134 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fa1e4dd9dfb23570526d5831f48694f60b92d3e7bb18f0a7826f06c92d8bfe0" Mar 19 09:51:32.877155 master-0 kubenswrapper[27819]: I0319 09:51:32.877154 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-cflgz" Mar 19 09:51:33.024326 master-0 kubenswrapper[27819]: I0319 09:51:33.024247 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 19 09:51:33.027703 master-0 kubenswrapper[27819]: E0319 09:51:33.025947 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d45107-4dfc-40e2-b122-216848830469" containerName="nova-cell0-conductor-db-sync" Mar 19 09:51:33.027703 master-0 kubenswrapper[27819]: I0319 09:51:33.025980 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d45107-4dfc-40e2-b122-216848830469" containerName="nova-cell0-conductor-db-sync" Mar 19 09:51:33.027703 master-0 kubenswrapper[27819]: I0319 09:51:33.026318 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d45107-4dfc-40e2-b122-216848830469" containerName="nova-cell0-conductor-db-sync" Mar 19 09:51:33.027703 master-0 kubenswrapper[27819]: I0319 09:51:33.027305 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.033797 master-0 kubenswrapper[27819]: I0319 09:51:33.032957 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 19 09:51:33.035135 master-0 kubenswrapper[27819]: I0319 09:51:33.034640 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 19 09:51:33.104447 master-0 kubenswrapper[27819]: I0319 09:51:33.104361 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.104711 master-0 kubenswrapper[27819]: I0319 09:51:33.104491 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.104711 master-0 kubenswrapper[27819]: I0319 09:51:33.104660 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25zz6\" (UniqueName: \"kubernetes.io/projected/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-kube-api-access-25zz6\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.206651 master-0 kubenswrapper[27819]: I0319 09:51:33.206502 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.206864 master-0 kubenswrapper[27819]: I0319 09:51:33.206769 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.206982 master-0 kubenswrapper[27819]: I0319 09:51:33.206949 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25zz6\" (UniqueName: \"kubernetes.io/projected/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-kube-api-access-25zz6\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.219369 master-0 kubenswrapper[27819]: I0319 09:51:33.210788 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.220259 master-0 kubenswrapper[27819]: I0319 09:51:33.220229 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.222639 master-0 kubenswrapper[27819]: I0319 09:51:33.222602 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25zz6\" (UniqueName: \"kubernetes.io/projected/3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9-kube-api-access-25zz6\") pod \"nova-cell0-conductor-0\" (UID: \"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9\") " pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.377352 master-0 kubenswrapper[27819]: I0319 09:51:33.377214 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:33.923233 master-0 kubenswrapper[27819]: I0319 09:51:33.923183 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 19 09:51:34.905150 master-0 kubenswrapper[27819]: I0319 09:51:34.905087 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9","Type":"ContainerStarted","Data":"89cfc757a7b5ffab4fc20edf4653c0a0f0a7e48aed89d0178818a63a49401650"}
Mar 19 09:51:34.905150 master-0 kubenswrapper[27819]: I0319 09:51:34.905152 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"3b936c4e-9e1b-4b56-ac58-7e6e9017c8f9","Type":"ContainerStarted","Data":"c37dface27d2dae9897fc77c4a42f61d4d2eac01e5c23d3c37d376d5c0e8ec3e"}
Mar 19 09:51:34.905909 master-0 kubenswrapper[27819]: I0319 09:51:34.905884 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:34.925747 master-0 kubenswrapper[27819]: I0319 09:51:34.925646 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.925624967 podStartE2EDuration="2.925624967s" podCreationTimestamp="2026-03-19 09:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:34.92460355 +0000 UTC m=+1079.846181242" watchObservedRunningTime="2026-03-19 09:51:34.925624967 +0000 UTC m=+1079.847202659"
Mar 19 09:51:43.406508 master-0 kubenswrapper[27819]: I0319 09:51:43.406440 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 19 09:51:43.937926 master-0 kubenswrapper[27819]: I0319 09:51:43.937843 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-v5k89"]
Mar 19 09:51:43.939337 master-0 kubenswrapper[27819]: I0319 09:51:43.939311 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:43.957496 master-0 kubenswrapper[27819]: I0319 09:51:43.957396 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 19 09:51:43.992077 master-0 kubenswrapper[27819]: I0319 09:51:43.966520 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5k89"]
Mar 19 09:51:43.992077 master-0 kubenswrapper[27819]: I0319 09:51:43.970346 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 19 09:51:44.056674 master-0 kubenswrapper[27819]: I0319 09:51:44.055321 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-scripts\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.056674 master-0 kubenswrapper[27819]: I0319 09:51:44.055701 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-config-data\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.056674 master-0 kubenswrapper[27819]: I0319 09:51:44.055994 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgv46\" (UniqueName: \"kubernetes.io/projected/87b8a71d-51a3-434e-92dd-4e7e341898f5-kube-api-access-wgv46\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.056674 master-0 kubenswrapper[27819]: I0319 09:51:44.056021 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.170311 master-0 kubenswrapper[27819]: I0319 09:51:44.169056 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-config-data\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.170311 master-0 kubenswrapper[27819]: I0319 09:51:44.169393 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgv46\" (UniqueName: \"kubernetes.io/projected/87b8a71d-51a3-434e-92dd-4e7e341898f5-kube-api-access-wgv46\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.170311 master-0 kubenswrapper[27819]: I0319 09:51:44.169428 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.170311 master-0 kubenswrapper[27819]: I0319 09:51:44.169577 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-scripts\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.173159 master-0 kubenswrapper[27819]: I0319 09:51:44.173137 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-scripts\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.175776 master-0 kubenswrapper[27819]: I0319 09:51:44.175465 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.183813 master-0 kubenswrapper[27819]: I0319 09:51:44.178470 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-config-data\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.187763 master-0 kubenswrapper[27819]: I0319 09:51:44.187687 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 19 09:51:44.189871 master-0 kubenswrapper[27819]: I0319 09:51:44.189733 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.207845 master-0 kubenswrapper[27819]: I0319 09:51:44.206233 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 19 09:51:44.212564 master-0 kubenswrapper[27819]: I0319 09:51:44.210909 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data"
Mar 19 09:51:44.229417 master-0 kubenswrapper[27819]: I0319 09:51:44.222334 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgv46\" (UniqueName: \"kubernetes.io/projected/87b8a71d-51a3-434e-92dd-4e7e341898f5-kube-api-access-wgv46\") pod \"nova-cell0-cell-mapping-v5k89\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.304313 master-0 kubenswrapper[27819]: I0319 09:51:44.303841 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5k89"
Mar 19 09:51:44.397836 master-0 kubenswrapper[27819]: I0319 09:51:44.397755 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 19 09:51:44.405875 master-0 kubenswrapper[27819]: I0319 09:51:44.400920 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.405875 master-0 kubenswrapper[27819]: I0319 09:51:44.405126 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 19 09:51:44.422260 master-0 kubenswrapper[27819]: I0319 09:51:44.422029 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4eefff-e24a-4688-9669-def923014c55-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.422798 master-0 kubenswrapper[27819]: I0319 09:51:44.422448 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glswl\" (UniqueName: \"kubernetes.io/projected/ec4eefff-e24a-4688-9669-def923014c55-kube-api-access-glswl\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.422798 master-0 kubenswrapper[27819]: I0319 09:51:44.422530 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4eefff-e24a-4688-9669-def923014c55-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.437570 master-0 kubenswrapper[27819]: I0319 09:51:44.432948 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 19 09:51:44.520571 master-0 kubenswrapper[27819]: I0319 09:51:44.490164 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527616 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-config-data\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527685 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527708 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmbhj\" (UniqueName: \"kubernetes.io/projected/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-kube-api-access-zmbhj\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527730 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527757 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glswl\" (UniqueName: \"kubernetes.io/projected/ec4eefff-e24a-4688-9669-def923014c55-kube-api-access-glswl\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527785 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4eefff-e24a-4688-9669-def923014c55-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527822 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-logs\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527882 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527914 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4eefff-e24a-4688-9669-def923014c55-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.531923 master-0 kubenswrapper[27819]: I0319 09:51:44.527945 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvhdq\" (UniqueName: \"kubernetes.io/projected/ca8a1d31-9568-44f1-8642-12df8d02b456-kube-api-access-vvhdq\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.546812 master-0 kubenswrapper[27819]: I0319 09:51:44.538519 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 19 09:51:44.559013 master-0 kubenswrapper[27819]: I0319 09:51:44.558948 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec4eefff-e24a-4688-9669-def923014c55-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.570620 master-0 kubenswrapper[27819]: I0319 09:51:44.566007 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 19 09:51:44.570620 master-0 kubenswrapper[27819]: I0319 09:51:44.566194 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 19 09:51:44.591606 master-0 kubenswrapper[27819]: I0319 09:51:44.579517 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec4eefff-e24a-4688-9669-def923014c55-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.591606 master-0 kubenswrapper[27819]: I0319 09:51:44.581284 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.591606 master-0 kubenswrapper[27819]: I0319 09:51:44.587807 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631197 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631327 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-logs\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631421 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631495 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvhdq\" (UniqueName: \"kubernetes.io/projected/ca8a1d31-9568-44f1-8642-12df8d02b456-kube-api-access-vvhdq\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631619 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-config-data\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631667 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631692 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmbhj\" (UniqueName: \"kubernetes.io/projected/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-kube-api-access-zmbhj\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.631924 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-logs\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.637095 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 19 09:51:44.645785 master-0 kubenswrapper[27819]: I0319 09:51:44.640347 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glswl\" (UniqueName: \"kubernetes.io/projected/ec4eefff-e24a-4688-9669-def923014c55-kube-api-access-glswl\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"ec4eefff-e24a-4688-9669-def923014c55\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.671593 master-0 kubenswrapper[27819]: I0319 09:51:44.648060 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.671593 master-0 kubenswrapper[27819]: I0319 09:51:44.648117 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.671593 master-0 kubenswrapper[27819]: I0319 09:51:44.648907 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-config-data\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.671593 master-0 kubenswrapper[27819]: I0319 09:51:44.660119 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.728680 master-0 kubenswrapper[27819]: I0319 09:51:44.724714 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 09:51:44.728680 master-0 kubenswrapper[27819]: I0319 09:51:44.726768 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 09:51:44.744657 master-0 kubenswrapper[27819]: I0319 09:51:44.733724 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-config-data\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.744657 master-0 kubenswrapper[27819]: I0319 09:51:44.733799 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.744657 master-0 kubenswrapper[27819]: I0319 09:51:44.733832 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6smv\" (UniqueName: \"kubernetes.io/projected/2bb8c2ff-550a-41ea-a571-d07825b400c4-kube-api-access-z6smv\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.744657 master-0 kubenswrapper[27819]: I0319 09:51:44.736026 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 19 09:51:44.746032 master-0 kubenswrapper[27819]: I0319 09:51:44.746005 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvhdq\" (UniqueName: \"kubernetes.io/projected/ca8a1d31-9568-44f1-8642-12df8d02b456-kube-api-access-vvhdq\") pod \"nova-cell1-novncproxy-0\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.773580 master-0 kubenswrapper[27819]: I0319 09:51:44.767846 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmbhj\" (UniqueName: \"kubernetes.io/projected/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-kube-api-access-zmbhj\") pod \"nova-api-0\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " pod="openstack/nova-api-0"
Mar 19 09:51:44.800924 master-0 kubenswrapper[27819]: I0319 09:51:44.788273 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 19 09:51:44.836686 master-0 kubenswrapper[27819]: I0319 09:51:44.836345 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-config-data\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.836686 master-0 kubenswrapper[27819]: I0319 09:51:44.836454 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-config-data\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.836686 master-0 kubenswrapper[27819]: I0319 09:51:44.836482 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klxjb\" (UniqueName: \"kubernetes.io/projected/a58353b7-1381-414b-ba2f-d525066eda33-kube-api-access-klxjb\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.836931 master-0 kubenswrapper[27819]: I0319 09:51:44.836534 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.836931 master-0 kubenswrapper[27819]: I0319 09:51:44.836911 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6smv\" (UniqueName: \"kubernetes.io/projected/2bb8c2ff-550a-41ea-a571-d07825b400c4-kube-api-access-z6smv\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.839657 master-0 kubenswrapper[27819]: I0319 09:51:44.837038 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58353b7-1381-414b-ba2f-d525066eda33-logs\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.839657 master-0 kubenswrapper[27819]: I0319 09:51:44.837130 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.851662 master-0 kubenswrapper[27819]: I0319 09:51:44.850145 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-config-data\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.872572 master-0 kubenswrapper[27819]: I0319 09:51:44.858290 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 19 09:51:44.884568 master-0 kubenswrapper[27819]: I0319 09:51:44.876636 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 19 09:51:44.884568 master-0 kubenswrapper[27819]: I0319 09:51:44.883236 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 19 09:51:44.894000 master-0 kubenswrapper[27819]: I0319 09:51:44.891394 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6smv\" (UniqueName: \"kubernetes.io/projected/2bb8c2ff-550a-41ea-a571-d07825b400c4-kube-api-access-z6smv\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.932673 master-0 kubenswrapper[27819]: I0319 09:51:44.928229 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 19 09:51:44.939879 master-0 kubenswrapper[27819]: I0319 09:51:44.939843 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-config-data\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.940465 master-0 kubenswrapper[27819]: I0319 09:51:44.940447 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klxjb\" (UniqueName: \"kubernetes.io/projected/a58353b7-1381-414b-ba2f-d525066eda33-kube-api-access-klxjb\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.940730 master-0 kubenswrapper[27819]: I0319 09:51:44.940698 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58353b7-1381-414b-ba2f-d525066eda33-logs\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.940893 master-0 kubenswrapper[27819]: I0319 09:51:44.940844 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.943068 master-0 kubenswrapper[27819]: I0319 09:51:44.943026 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.943334 master-0 kubenswrapper[27819]: I0319 09:51:44.943309 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58353b7-1381-414b-ba2f-d525066eda33-logs\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.948328 master-0 kubenswrapper[27819]: I0319 09:51:44.948283 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.950508 master-0 kubenswrapper[27819]: I0319 09:51:44.950366 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-config-data\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.962784 master-0 kubenswrapper[27819]: I0319 09:51:44.960994 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 19 09:51:44.975501 master-0 kubenswrapper[27819]: I0319 09:51:44.974897 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klxjb\" (UniqueName: \"kubernetes.io/projected/a58353b7-1381-414b-ba2f-d525066eda33-kube-api-access-klxjb\") pod \"nova-metadata-0\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " pod="openstack/nova-metadata-0"
Mar 19 09:51:44.978400 master-0 kubenswrapper[27819]: I0319 09:51:44.977613 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58968b8785-ggchc"]
Mar 19 09:51:44.979947 master-0 kubenswrapper[27819]: I0319 09:51:44.979850 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.015991 master-0 kubenswrapper[27819]: I0319 09:51:45.014832 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58968b8785-ggchc"]
Mar 19 09:51:45.069354 master-0 kubenswrapper[27819]: I0319 09:51:45.069301 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 19 09:51:45.153693 master-0 kubenswrapper[27819]: I0319 09:51:45.150719 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-svc\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.153693 master-0 kubenswrapper[27819]: I0319 09:51:45.150781 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jd8c\" (UniqueName: \"kubernetes.io/projected/a2a19f02-b040-4ac7-ba5e-40aab5169420-kube-api-access-9jd8c\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.153693 master-0 kubenswrapper[27819]: I0319 09:51:45.150809 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-swift-storage-0\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.153693 master-0 kubenswrapper[27819]: I0319 09:51:45.150843 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-nb\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.153693 master-0 kubenswrapper[27819]: I0319 09:51:45.150945 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-config\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.153693 master-0 kubenswrapper[27819]: I0319 09:51:45.151063 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-sb\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.253983 master-0 kubenswrapper[27819]: I0319 09:51:45.252800 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-svc\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.253983 master-0 kubenswrapper[27819]: I0319 09:51:45.252861 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-swift-storage-0\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.253983 master-0 kubenswrapper[27819]: I0319 09:51:45.252884 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jd8c\" (UniqueName: \"kubernetes.io/projected/a2a19f02-b040-4ac7-ba5e-40aab5169420-kube-api-access-9jd8c\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc"
Mar 19 09:51:45.253983 master-0 kubenswrapper[27819]: I0319 09:51:45.252912 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-nb\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.253983 master-0 kubenswrapper[27819]: I0319 09:51:45.252978 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-config\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.253983 master-0 kubenswrapper[27819]: I0319 09:51:45.253054 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-sb\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.271012 master-0 kubenswrapper[27819]: I0319 09:51:45.262964 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-config\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.271012 master-0 kubenswrapper[27819]: I0319 09:51:45.263084 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-nb\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.272917 master-0 kubenswrapper[27819]: I0319 09:51:45.271353 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-svc\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.275164 master-0 kubenswrapper[27819]: I0319 09:51:45.273882 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-swift-storage-0\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.275164 master-0 kubenswrapper[27819]: I0319 09:51:45.274531 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-sb\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.308622 master-0 kubenswrapper[27819]: I0319 09:51:45.308476 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jd8c\" (UniqueName: \"kubernetes.io/projected/a2a19f02-b040-4ac7-ba5e-40aab5169420-kube-api-access-9jd8c\") pod \"dnsmasq-dns-58968b8785-ggchc\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.397177 master-0 kubenswrapper[27819]: I0319 09:51:45.397019 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5k89"] Mar 19 09:51:45.463910 master-0 kubenswrapper[27819]: W0319 09:51:45.458277 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87b8a71d_51a3_434e_92dd_4e7e341898f5.slice/crio-babd97c7df40ed3ec1e1f222ed7f4e6139fe121837d8472e5ff789f46217edca WatchSource:0}: Error finding container 
babd97c7df40ed3ec1e1f222ed7f4e6139fe121837d8472e5ff789f46217edca: Status 404 returned error can't find the container with id babd97c7df40ed3ec1e1f222ed7f4e6139fe121837d8472e5ff789f46217edca Mar 19 09:51:45.610570 master-0 kubenswrapper[27819]: I0319 09:51:45.601018 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:45.640353 master-0 kubenswrapper[27819]: I0319 09:51:45.640286 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 19 09:51:45.759690 master-0 kubenswrapper[27819]: I0319 09:51:45.759641 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fg4fw"] Mar 19 09:51:45.763702 master-0 kubenswrapper[27819]: I0319 09:51:45.763559 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:45.774630 master-0 kubenswrapper[27819]: I0319 09:51:45.773721 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 19 09:51:45.780561 master-0 kubenswrapper[27819]: I0319 09:51:45.776180 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 19 09:51:45.856991 master-0 kubenswrapper[27819]: I0319 09:51:45.856600 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fg4fw"] Mar 19 09:51:45.912572 master-0 kubenswrapper[27819]: I0319 09:51:45.901164 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-scripts\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:45.913770 master-0 kubenswrapper[27819]: I0319 
09:51:45.913727 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-config-data\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:45.923616 master-0 kubenswrapper[27819]: I0319 09:51:45.913904 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:51:45.924413 master-0 kubenswrapper[27819]: I0319 09:51:45.924358 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:45.924538 master-0 kubenswrapper[27819]: I0319 09:51:45.924519 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqrt\" (UniqueName: \"kubernetes.io/projected/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-kube-api-access-9gqrt\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:45.934102 master-0 kubenswrapper[27819]: W0319 09:51:45.930071 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca8a1d31_9568_44f1_8642_12df8d02b456.slice/crio-1cfe40b806b46c0c89235cf75de691c359f87bf9201d354f2aa0ee69d55cf2c4 WatchSource:0}: Error finding container 1cfe40b806b46c0c89235cf75de691c359f87bf9201d354f2aa0ee69d55cf2c4: Status 404 returned error can't find the container with id 1cfe40b806b46c0c89235cf75de691c359f87bf9201d354f2aa0ee69d55cf2c4 Mar 19 09:51:45.934102 master-0 
kubenswrapper[27819]: I0319 09:51:45.932809 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 09:51:45.985569 master-0 kubenswrapper[27819]: I0319 09:51:45.981681 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:51:46.028957 master-0 kubenswrapper[27819]: I0319 09:51:46.028803 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.029152 master-0 kubenswrapper[27819]: I0319 09:51:46.029016 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gqrt\" (UniqueName: \"kubernetes.io/projected/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-kube-api-access-9gqrt\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.038382 master-0 kubenswrapper[27819]: I0319 09:51:46.029762 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-scripts\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.038382 master-0 kubenswrapper[27819]: I0319 09:51:46.029817 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-config-data\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.038382 master-0 
kubenswrapper[27819]: I0319 09:51:46.036509 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-scripts\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.059562 master-0 kubenswrapper[27819]: I0319 09:51:46.052021 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-config-data\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.059562 master-0 kubenswrapper[27819]: I0319 09:51:46.053268 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.069560 master-0 kubenswrapper[27819]: I0319 09:51:46.064832 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gqrt\" (UniqueName: \"kubernetes.io/projected/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-kube-api-access-9gqrt\") pod \"nova-cell1-conductor-db-sync-fg4fw\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.069560 master-0 kubenswrapper[27819]: I0319 09:51:46.067610 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc","Type":"ContainerStarted","Data":"85aa3173532a6e2c0be6cc77d3f1b07256142b57bcebf865dc213cb4b91e0dd2"} Mar 19 09:51:46.081587 master-0 kubenswrapper[27819]: I0319 09:51:46.078300 27819 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8c2ff-550a-41ea-a571-d07825b400c4","Type":"ContainerStarted","Data":"31b01c027abcf45aadd8f95f3603db7f4e2b05b55f5c1c946d97d3c98f5bad58"} Mar 19 09:51:46.081587 master-0 kubenswrapper[27819]: I0319 09:51:46.080430 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"ec4eefff-e24a-4688-9669-def923014c55","Type":"ContainerStarted","Data":"aa870a7f3827192ec415d2c9683f771cccdc7859ab0107d173a607b896cc7abb"} Mar 19 09:51:46.081587 master-0 kubenswrapper[27819]: I0319 09:51:46.081335 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca8a1d31-9568-44f1-8642-12df8d02b456","Type":"ContainerStarted","Data":"1cfe40b806b46c0c89235cf75de691c359f87bf9201d354f2aa0ee69d55cf2c4"} Mar 19 09:51:46.084670 master-0 kubenswrapper[27819]: I0319 09:51:46.082777 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5k89" event={"ID":"87b8a71d-51a3-434e-92dd-4e7e341898f5","Type":"ContainerStarted","Data":"115d3a998584e2b265ad430c72cec96f2f7b79b5f2b8a01f091215a6bf166ca6"} Mar 19 09:51:46.084670 master-0 kubenswrapper[27819]: I0319 09:51:46.082828 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5k89" event={"ID":"87b8a71d-51a3-434e-92dd-4e7e341898f5","Type":"ContainerStarted","Data":"babd97c7df40ed3ec1e1f222ed7f4e6139fe121837d8472e5ff789f46217edca"} Mar 19 09:51:46.128599 master-0 kubenswrapper[27819]: I0319 09:51:46.127256 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:51:46.163136 master-0 kubenswrapper[27819]: I0319 09:51:46.144829 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-v5k89" podStartSLOduration=3.144806242 podStartE2EDuration="3.144806242s" podCreationTimestamp="2026-03-19 09:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:46.133594641 +0000 UTC m=+1091.055172333" watchObservedRunningTime="2026-03-19 09:51:46.144806242 +0000 UTC m=+1091.066383944" Mar 19 09:51:46.188249 master-0 kubenswrapper[27819]: I0319 09:51:46.187911 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:51:46.306960 master-0 kubenswrapper[27819]: I0319 09:51:46.306916 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58968b8785-ggchc"] Mar 19 09:51:46.745234 master-0 kubenswrapper[27819]: I0319 09:51:46.745166 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fg4fw"] Mar 19 09:51:47.130792 master-0 kubenswrapper[27819]: I0319 09:51:47.130709 27819 generic.go:334] "Generic (PLEG): container finished" podID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerID="ffd146b84924912685c7d1573a1e482c9ec50803ba50f5fe3e244e9dffaf32d5" exitCode=0 Mar 19 09:51:47.131039 master-0 kubenswrapper[27819]: I0319 09:51:47.130776 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58968b8785-ggchc" event={"ID":"a2a19f02-b040-4ac7-ba5e-40aab5169420","Type":"ContainerDied","Data":"ffd146b84924912685c7d1573a1e482c9ec50803ba50f5fe3e244e9dffaf32d5"} Mar 19 09:51:47.131039 master-0 kubenswrapper[27819]: I0319 09:51:47.130842 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58968b8785-ggchc" 
event={"ID":"a2a19f02-b040-4ac7-ba5e-40aab5169420","Type":"ContainerStarted","Data":"5c232ec83781016a7c11ce136bcdbfa7377350d16165013559daba4cc0d5ce29"} Mar 19 09:51:47.140716 master-0 kubenswrapper[27819]: I0319 09:51:47.133113 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58353b7-1381-414b-ba2f-d525066eda33","Type":"ContainerStarted","Data":"9b0dbd8e43848ecf70676dcb9d3920b868d8d5179922e2aa51d9d04ee5001102"} Mar 19 09:51:47.140716 master-0 kubenswrapper[27819]: I0319 09:51:47.139008 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" event={"ID":"63105ef4-b1a4-4f96-a846-e5a5f751b7b9","Type":"ContainerStarted","Data":"d8cbefbe12be873f82fe6252c1dd4dcb6055dd79cd52016c6c1b225b747b9ad2"} Mar 19 09:51:47.140716 master-0 kubenswrapper[27819]: I0319 09:51:47.139060 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" event={"ID":"63105ef4-b1a4-4f96-a846-e5a5f751b7b9","Type":"ContainerStarted","Data":"931f40c3e7d6dc9a46cfc233f26f63f4b6f16dfed94c79e7c36f2d5f0eb5ef78"} Mar 19 09:51:47.247660 master-0 kubenswrapper[27819]: I0319 09:51:47.245405 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" podStartSLOduration=2.24538235 podStartE2EDuration="2.24538235s" podCreationTimestamp="2026-03-19 09:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:47.202185242 +0000 UTC m=+1092.123762934" watchObservedRunningTime="2026-03-19 09:51:47.24538235 +0000 UTC m=+1092.166960042" Mar 19 09:51:48.167831 master-0 kubenswrapper[27819]: I0319 09:51:48.167672 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58968b8785-ggchc" 
event={"ID":"a2a19f02-b040-4ac7-ba5e-40aab5169420","Type":"ContainerStarted","Data":"e04af2a5f34d973f3b08b43c387cb86c88f8e64559d556dd858ae12d70c87536"} Mar 19 09:51:48.167831 master-0 kubenswrapper[27819]: I0319 09:51:48.167780 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:49.214904 master-0 kubenswrapper[27819]: I0319 09:51:49.211096 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58968b8785-ggchc" podStartSLOduration=5.211073741 podStartE2EDuration="5.211073741s" podCreationTimestamp="2026-03-19 09:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:51:48.210005479 +0000 UTC m=+1093.131583171" watchObservedRunningTime="2026-03-19 09:51:49.211073741 +0000 UTC m=+1094.132651433" Mar 19 09:51:49.218035 master-0 kubenswrapper[27819]: I0319 09:51:49.217970 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 09:51:49.229424 master-0 kubenswrapper[27819]: I0319 09:51:49.229345 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:51:50.237751 master-0 kubenswrapper[27819]: I0319 09:51:50.236494 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58353b7-1381-414b-ba2f-d525066eda33","Type":"ContainerStarted","Data":"9f97dae52de7345dc64e99cf085749af2d92fcfa2fcd84ca8c5b9666f4a4b09d"} Mar 19 09:51:50.250480 master-0 kubenswrapper[27819]: I0319 09:51:50.250051 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca8a1d31-9568-44f1-8642-12df8d02b456","Type":"ContainerStarted","Data":"85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7"} Mar 19 09:51:50.250480 master-0 kubenswrapper[27819]: I0319 09:51:50.250110 27819 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ca8a1d31-9568-44f1-8642-12df8d02b456" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7" gracePeriod=30 Mar 19 09:51:50.252311 master-0 kubenswrapper[27819]: I0319 09:51:50.252233 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc","Type":"ContainerStarted","Data":"e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed"} Mar 19 09:51:50.261168 master-0 kubenswrapper[27819]: I0319 09:51:50.261108 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8c2ff-550a-41ea-a571-d07825b400c4","Type":"ContainerStarted","Data":"bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef"} Mar 19 09:51:50.340636 master-0 kubenswrapper[27819]: I0319 09:51:50.340555 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.536138579 podStartE2EDuration="6.340094897s" podCreationTimestamp="2026-03-19 09:51:44 +0000 UTC" firstStartedPulling="2026-03-19 09:51:45.945605571 +0000 UTC m=+1090.867183263" lastFinishedPulling="2026-03-19 09:51:49.749561889 +0000 UTC m=+1094.671139581" observedRunningTime="2026-03-19 09:51:50.287946727 +0000 UTC m=+1095.209524429" watchObservedRunningTime="2026-03-19 09:51:50.340094897 +0000 UTC m=+1095.261672589" Mar 19 09:51:50.373870 master-0 kubenswrapper[27819]: I0319 09:51:50.372042 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.622324462 podStartE2EDuration="6.372014664s" podCreationTimestamp="2026-03-19 09:51:44 +0000 UTC" firstStartedPulling="2026-03-19 09:51:45.999868267 +0000 UTC m=+1090.921445969" lastFinishedPulling="2026-03-19 
09:51:49.749558479 +0000 UTC m=+1094.671136171" observedRunningTime="2026-03-19 09:51:50.313703994 +0000 UTC m=+1095.235281696" watchObservedRunningTime="2026-03-19 09:51:50.372014664 +0000 UTC m=+1095.293592356" Mar 19 09:51:51.273831 master-0 kubenswrapper[27819]: I0319 09:51:51.273771 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc","Type":"ContainerStarted","Data":"c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea"} Mar 19 09:51:51.281567 master-0 kubenswrapper[27819]: I0319 09:51:51.280217 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-log" containerID="cri-o://9f97dae52de7345dc64e99cf085749af2d92fcfa2fcd84ca8c5b9666f4a4b09d" gracePeriod=30 Mar 19 09:51:51.282148 master-0 kubenswrapper[27819]: I0319 09:51:51.282125 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-metadata" containerID="cri-o://c6fdaaca235cf5f137980f33723f842c9a8efaecfe201b01c535c8c9ccc0c876" gracePeriod=30 Mar 19 09:51:51.305562 master-0 kubenswrapper[27819]: I0319 09:51:51.304500 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58353b7-1381-414b-ba2f-d525066eda33","Type":"ContainerStarted","Data":"c6fdaaca235cf5f137980f33723f842c9a8efaecfe201b01c535c8c9ccc0c876"} Mar 19 09:51:51.840016 master-0 kubenswrapper[27819]: I0319 09:51:51.839840 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.046315868 podStartE2EDuration="7.839811566s" podCreationTimestamp="2026-03-19 09:51:44 +0000 UTC" firstStartedPulling="2026-03-19 09:51:45.956039621 +0000 UTC m=+1090.877617313" lastFinishedPulling="2026-03-19 
09:51:49.749535319 +0000 UTC m=+1094.671113011" observedRunningTime="2026-03-19 09:51:51.828088072 +0000 UTC m=+1096.749665804" watchObservedRunningTime="2026-03-19 09:51:51.839811566 +0000 UTC m=+1096.761389258" Mar 19 09:51:52.072900 master-0 kubenswrapper[27819]: I0319 09:51:52.059662 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.507983199 podStartE2EDuration="8.059643611s" podCreationTimestamp="2026-03-19 09:51:44 +0000 UTC" firstStartedPulling="2026-03-19 09:51:46.200730391 +0000 UTC m=+1091.122308083" lastFinishedPulling="2026-03-19 09:51:49.752390803 +0000 UTC m=+1094.673968495" observedRunningTime="2026-03-19 09:51:52.055182355 +0000 UTC m=+1096.976760067" watchObservedRunningTime="2026-03-19 09:51:52.059643611 +0000 UTC m=+1096.981221303" Mar 19 09:51:52.348322 master-0 kubenswrapper[27819]: I0319 09:51:52.348259 27819 generic.go:334] "Generic (PLEG): container finished" podID="a58353b7-1381-414b-ba2f-d525066eda33" containerID="c6fdaaca235cf5f137980f33723f842c9a8efaecfe201b01c535c8c9ccc0c876" exitCode=0 Mar 19 09:51:52.348322 master-0 kubenswrapper[27819]: I0319 09:51:52.348300 27819 generic.go:334] "Generic (PLEG): container finished" podID="a58353b7-1381-414b-ba2f-d525066eda33" containerID="9f97dae52de7345dc64e99cf085749af2d92fcfa2fcd84ca8c5b9666f4a4b09d" exitCode=143 Mar 19 09:51:52.348841 master-0 kubenswrapper[27819]: I0319 09:51:52.348574 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58353b7-1381-414b-ba2f-d525066eda33","Type":"ContainerDied","Data":"c6fdaaca235cf5f137980f33723f842c9a8efaecfe201b01c535c8c9ccc0c876"} Mar 19 09:51:52.348841 master-0 kubenswrapper[27819]: I0319 09:51:52.348631 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a58353b7-1381-414b-ba2f-d525066eda33","Type":"ContainerDied","Data":"9f97dae52de7345dc64e99cf085749af2d92fcfa2fcd84ca8c5b9666f4a4b09d"} Mar 19 09:51:52.490594 master-0 kubenswrapper[27819]: I0319 09:51:52.490451 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:51:52.625504 master-0 kubenswrapper[27819]: I0319 09:51:52.625452 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58353b7-1381-414b-ba2f-d525066eda33-logs\") pod \"a58353b7-1381-414b-ba2f-d525066eda33\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " Mar 19 09:51:52.626006 master-0 kubenswrapper[27819]: I0319 09:51:52.625989 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-config-data\") pod \"a58353b7-1381-414b-ba2f-d525066eda33\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " Mar 19 09:51:52.626171 master-0 kubenswrapper[27819]: I0319 09:51:52.626156 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klxjb\" (UniqueName: \"kubernetes.io/projected/a58353b7-1381-414b-ba2f-d525066eda33-kube-api-access-klxjb\") pod \"a58353b7-1381-414b-ba2f-d525066eda33\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " Mar 19 09:51:52.626289 master-0 kubenswrapper[27819]: I0319 09:51:52.626276 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-combined-ca-bundle\") pod \"a58353b7-1381-414b-ba2f-d525066eda33\" (UID: \"a58353b7-1381-414b-ba2f-d525066eda33\") " Mar 19 09:51:52.629018 master-0 kubenswrapper[27819]: I0319 09:51:52.628984 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a58353b7-1381-414b-ba2f-d525066eda33-logs" (OuterVolumeSpecName: "logs") pod "a58353b7-1381-414b-ba2f-d525066eda33" (UID: "a58353b7-1381-414b-ba2f-d525066eda33"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:51:52.630482 master-0 kubenswrapper[27819]: I0319 09:51:52.630462 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58353b7-1381-414b-ba2f-d525066eda33-kube-api-access-klxjb" (OuterVolumeSpecName: "kube-api-access-klxjb") pod "a58353b7-1381-414b-ba2f-d525066eda33" (UID: "a58353b7-1381-414b-ba2f-d525066eda33"). InnerVolumeSpecName "kube-api-access-klxjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:51:52.656947 master-0 kubenswrapper[27819]: I0319 09:51:52.656911 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a58353b7-1381-414b-ba2f-d525066eda33" (UID: "a58353b7-1381-414b-ba2f-d525066eda33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:52.658990 master-0 kubenswrapper[27819]: I0319 09:51:52.658963 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-config-data" (OuterVolumeSpecName: "config-data") pod "a58353b7-1381-414b-ba2f-d525066eda33" (UID: "a58353b7-1381-414b-ba2f-d525066eda33"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:51:52.729626 master-0 kubenswrapper[27819]: I0319 09:51:52.729555 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:52.729626 master-0 kubenswrapper[27819]: I0319 09:51:52.729614 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klxjb\" (UniqueName: \"kubernetes.io/projected/a58353b7-1381-414b-ba2f-d525066eda33-kube-api-access-klxjb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:52.729626 master-0 kubenswrapper[27819]: I0319 09:51:52.729627 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58353b7-1381-414b-ba2f-d525066eda33-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:52.729626 master-0 kubenswrapper[27819]: I0319 09:51:52.729635 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58353b7-1381-414b-ba2f-d525066eda33-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:51:53.362502 master-0 kubenswrapper[27819]: I0319 09:51:53.362436 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a58353b7-1381-414b-ba2f-d525066eda33","Type":"ContainerDied","Data":"9b0dbd8e43848ecf70676dcb9d3920b868d8d5179922e2aa51d9d04ee5001102"} Mar 19 09:51:53.363092 master-0 kubenswrapper[27819]: I0319 09:51:53.362516 27819 scope.go:117] "RemoveContainer" containerID="c6fdaaca235cf5f137980f33723f842c9a8efaecfe201b01c535c8c9ccc0c876" Mar 19 09:51:53.363092 master-0 kubenswrapper[27819]: I0319 09:51:53.362591 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:51:53.402121 master-0 kubenswrapper[27819]: I0319 09:51:53.400923 27819 scope.go:117] "RemoveContainer" containerID="9f97dae52de7345dc64e99cf085749af2d92fcfa2fcd84ca8c5b9666f4a4b09d" Mar 19 09:51:53.402121 master-0 kubenswrapper[27819]: I0319 09:51:53.401136 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:51:53.426252 master-0 kubenswrapper[27819]: I0319 09:51:53.426211 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:51:53.456808 master-0 kubenswrapper[27819]: I0319 09:51:53.455786 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:51:53.456808 master-0 kubenswrapper[27819]: E0319 09:51:53.456528 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-metadata" Mar 19 09:51:53.456808 master-0 kubenswrapper[27819]: I0319 09:51:53.456563 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-metadata" Mar 19 09:51:53.456808 master-0 kubenswrapper[27819]: E0319 09:51:53.456649 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-log" Mar 19 09:51:53.456808 master-0 kubenswrapper[27819]: I0319 09:51:53.456659 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-log" Mar 19 09:51:53.457198 master-0 kubenswrapper[27819]: I0319 09:51:53.456963 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-metadata" Mar 19 09:51:53.457198 master-0 kubenswrapper[27819]: I0319 09:51:53.457007 27819 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a58353b7-1381-414b-ba2f-d525066eda33" containerName="nova-metadata-log" Mar 19 09:51:53.465754 master-0 kubenswrapper[27819]: I0319 09:51:53.458883 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:51:53.467197 master-0 kubenswrapper[27819]: I0319 09:51:53.467123 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:51:53.474298 master-0 kubenswrapper[27819]: I0319 09:51:53.472651 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 19 09:51:53.474298 master-0 kubenswrapper[27819]: I0319 09:51:53.472858 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 09:51:53.560167 master-0 kubenswrapper[27819]: I0319 09:51:53.560112 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-config-data\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.560421 master-0 kubenswrapper[27819]: I0319 09:51:53.560289 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.560656 master-0 kubenswrapper[27819]: I0319 09:51:53.560585 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk746\" (UniqueName: \"kubernetes.io/projected/cc7afa40-de74-4ee3-bf75-f88336d7207b-kube-api-access-wk746\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " 
pod="openstack/nova-metadata-0" Mar 19 09:51:53.560916 master-0 kubenswrapper[27819]: I0319 09:51:53.560878 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.560988 master-0 kubenswrapper[27819]: I0319 09:51:53.560919 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7afa40-de74-4ee3-bf75-f88336d7207b-logs\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.662897 master-0 kubenswrapper[27819]: I0319 09:51:53.662811 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.663105 master-0 kubenswrapper[27819]: I0319 09:51:53.662916 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk746\" (UniqueName: \"kubernetes.io/projected/cc7afa40-de74-4ee3-bf75-f88336d7207b-kube-api-access-wk746\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.663105 master-0 kubenswrapper[27819]: I0319 09:51:53.663088 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.663280 
master-0 kubenswrapper[27819]: I0319 09:51:53.663205 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7afa40-de74-4ee3-bf75-f88336d7207b-logs\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.663337 master-0 kubenswrapper[27819]: I0319 09:51:53.663324 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-config-data\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.663970 master-0 kubenswrapper[27819]: I0319 09:51:53.663930 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7afa40-de74-4ee3-bf75-f88336d7207b-logs\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.666963 master-0 kubenswrapper[27819]: I0319 09:51:53.666921 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.667564 master-0 kubenswrapper[27819]: I0319 09:51:53.667505 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.668115 master-0 kubenswrapper[27819]: I0319 09:51:53.668085 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-config-data\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.686601 master-0 kubenswrapper[27819]: I0319 09:51:53.686535 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk746\" (UniqueName: \"kubernetes.io/projected/cc7afa40-de74-4ee3-bf75-f88336d7207b-kube-api-access-wk746\") pod \"nova-metadata-0\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " pod="openstack/nova-metadata-0" Mar 19 09:51:53.796591 master-0 kubenswrapper[27819]: I0319 09:51:53.796555 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:51:54.294462 master-0 kubenswrapper[27819]: I0319 09:51:54.294348 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:51:54.884984 master-0 kubenswrapper[27819]: I0319 09:51:54.884874 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:51:54.933958 master-0 kubenswrapper[27819]: I0319 09:51:54.933912 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 09:51:54.934643 master-0 kubenswrapper[27819]: I0319 09:51:54.934627 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 09:51:54.963227 master-0 kubenswrapper[27819]: I0319 09:51:54.963169 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 19 09:51:54.963227 master-0 kubenswrapper[27819]: I0319 09:51:54.963226 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 09:51:54.998498 master-0 kubenswrapper[27819]: I0319 09:51:54.998457 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Mar 19 09:51:55.310627 master-0 kubenswrapper[27819]: I0319 09:51:55.302476 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58353b7-1381-414b-ba2f-d525066eda33" path="/var/lib/kubelet/pods/a58353b7-1381-414b-ba2f-d525066eda33/volumes" Mar 19 09:51:55.424922 master-0 kubenswrapper[27819]: I0319 09:51:55.424870 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 09:51:55.604564 master-0 kubenswrapper[27819]: I0319 09:51:55.603723 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:51:56.025101 master-0 kubenswrapper[27819]: I0319 09:51:56.025026 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:51:56.026013 master-0 kubenswrapper[27819]: I0319 09:51:56.025443 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:51:56.149235 master-0 kubenswrapper[27819]: I0319 09:51:56.149185 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5df5bc68f9-s4qw5"] Mar 19 09:51:56.149831 master-0 kubenswrapper[27819]: I0319 09:51:56.149803 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerName="dnsmasq-dns" containerID="cri-o://986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4" gracePeriod=10 Mar 19 
09:51:57.007608 master-0 kubenswrapper[27819]: I0319 09:51:57.007475 27819 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.243:5353: connect: connection refused" Mar 19 09:52:00.926696 master-0 kubenswrapper[27819]: I0319 09:52:00.925922 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:52:01.004111 master-0 kubenswrapper[27819]: I0319 09:52:01.000726 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-sb\") pod \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " Mar 19 09:52:01.004111 master-0 kubenswrapper[27819]: I0319 09:52:01.000798 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4fbg\" (UniqueName: \"kubernetes.io/projected/492322b8-ccb0-4440-b0f4-8d43bbd889e0-kube-api-access-b4fbg\") pod \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " Mar 19 09:52:01.004111 master-0 kubenswrapper[27819]: I0319 09:52:01.000868 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-config\") pod \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " Mar 19 09:52:01.004111 master-0 kubenswrapper[27819]: I0319 09:52:01.000918 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-svc\") pod \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " Mar 19 
09:52:01.004111 master-0 kubenswrapper[27819]: I0319 09:52:01.000979 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-nb\") pod \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " Mar 19 09:52:01.004111 master-0 kubenswrapper[27819]: I0319 09:52:01.001033 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-swift-storage-0\") pod \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\" (UID: \"492322b8-ccb0-4440-b0f4-8d43bbd889e0\") " Mar 19 09:52:01.014337 master-0 kubenswrapper[27819]: I0319 09:52:01.014266 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492322b8-ccb0-4440-b0f4-8d43bbd889e0-kube-api-access-b4fbg" (OuterVolumeSpecName: "kube-api-access-b4fbg") pod "492322b8-ccb0-4440-b0f4-8d43bbd889e0" (UID: "492322b8-ccb0-4440-b0f4-8d43bbd889e0"). InnerVolumeSpecName "kube-api-access-b4fbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:01.064056 master-0 kubenswrapper[27819]: I0319 09:52:01.063995 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "492322b8-ccb0-4440-b0f4-8d43bbd889e0" (UID: "492322b8-ccb0-4440-b0f4-8d43bbd889e0"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:01.070038 master-0 kubenswrapper[27819]: I0319 09:52:01.069986 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "492322b8-ccb0-4440-b0f4-8d43bbd889e0" (UID: "492322b8-ccb0-4440-b0f4-8d43bbd889e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:01.077385 master-0 kubenswrapper[27819]: I0319 09:52:01.077307 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "492322b8-ccb0-4440-b0f4-8d43bbd889e0" (UID: "492322b8-ccb0-4440-b0f4-8d43bbd889e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:01.083254 master-0 kubenswrapper[27819]: I0319 09:52:01.083204 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-config" (OuterVolumeSpecName: "config") pod "492322b8-ccb0-4440-b0f4-8d43bbd889e0" (UID: "492322b8-ccb0-4440-b0f4-8d43bbd889e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:01.084247 master-0 kubenswrapper[27819]: I0319 09:52:01.084219 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "492322b8-ccb0-4440-b0f4-8d43bbd889e0" (UID: "492322b8-ccb0-4440-b0f4-8d43bbd889e0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:01.103556 master-0 kubenswrapper[27819]: I0319 09:52:01.103492 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:01.103556 master-0 kubenswrapper[27819]: I0319 09:52:01.103538 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:01.103556 master-0 kubenswrapper[27819]: I0319 09:52:01.103563 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4fbg\" (UniqueName: \"kubernetes.io/projected/492322b8-ccb0-4440-b0f4-8d43bbd889e0-kube-api-access-b4fbg\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:01.103886 master-0 kubenswrapper[27819]: I0319 09:52:01.103573 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:01.103886 master-0 kubenswrapper[27819]: I0319 09:52:01.103582 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:01.103886 master-0 kubenswrapper[27819]: I0319 09:52:01.103590 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/492322b8-ccb0-4440-b0f4-8d43bbd889e0-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:01.475581 master-0 kubenswrapper[27819]: I0319 09:52:01.474683 27819 generic.go:334] "Generic (PLEG): container finished" podID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" 
containerID="986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4" exitCode=0 Mar 19 09:52:01.475581 master-0 kubenswrapper[27819]: I0319 09:52:01.474804 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" event={"ID":"492322b8-ccb0-4440-b0f4-8d43bbd889e0","Type":"ContainerDied","Data":"986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4"} Mar 19 09:52:01.475581 master-0 kubenswrapper[27819]: I0319 09:52:01.474844 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" event={"ID":"492322b8-ccb0-4440-b0f4-8d43bbd889e0","Type":"ContainerDied","Data":"f154f14599ccf059fc6c2f26f658bdda4a5402ad112193112207752f3721a85c"} Mar 19 09:52:01.475581 master-0 kubenswrapper[27819]: I0319 09:52:01.474866 27819 scope.go:117] "RemoveContainer" containerID="986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4" Mar 19 09:52:01.475581 master-0 kubenswrapper[27819]: I0319 09:52:01.474994 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5df5bc68f9-s4qw5" Mar 19 09:52:01.479684 master-0 kubenswrapper[27819]: I0319 09:52:01.479598 27819 generic.go:334] "Generic (PLEG): container finished" podID="87b8a71d-51a3-434e-92dd-4e7e341898f5" containerID="115d3a998584e2b265ad430c72cec96f2f7b79b5f2b8a01f091215a6bf166ca6" exitCode=0 Mar 19 09:52:01.479684 master-0 kubenswrapper[27819]: I0319 09:52:01.479665 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5k89" event={"ID":"87b8a71d-51a3-434e-92dd-4e7e341898f5","Type":"ContainerDied","Data":"115d3a998584e2b265ad430c72cec96f2f7b79b5f2b8a01f091215a6bf166ca6"} Mar 19 09:52:01.486257 master-0 kubenswrapper[27819]: I0319 09:52:01.486203 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"ec4eefff-e24a-4688-9669-def923014c55","Type":"ContainerStarted","Data":"e3d706bc4b21c43558faeedd96157574bd293097cb94607b83e3400e4736c50d"} Mar 19 09:52:01.486670 master-0 kubenswrapper[27819]: I0319 09:52:01.486621 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 19 09:52:01.488861 master-0 kubenswrapper[27819]: I0319 09:52:01.488818 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc7afa40-de74-4ee3-bf75-f88336d7207b","Type":"ContainerStarted","Data":"b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644"} Mar 19 09:52:01.488861 master-0 kubenswrapper[27819]: I0319 09:52:01.488861 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc7afa40-de74-4ee3-bf75-f88336d7207b","Type":"ContainerStarted","Data":"f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22"} Mar 19 09:52:01.488987 master-0 kubenswrapper[27819]: I0319 09:52:01.488876 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"cc7afa40-de74-4ee3-bf75-f88336d7207b","Type":"ContainerStarted","Data":"f8dec171cfacad9528abefc4c2a514b4cc9a252102770c88a0a720860bf76178"} Mar 19 09:52:01.491682 master-0 kubenswrapper[27819]: I0319 09:52:01.490657 27819 generic.go:334] "Generic (PLEG): container finished" podID="63105ef4-b1a4-4f96-a846-e5a5f751b7b9" containerID="d8cbefbe12be873f82fe6252c1dd4dcb6055dd79cd52016c6c1b225b747b9ad2" exitCode=0 Mar 19 09:52:01.491682 master-0 kubenswrapper[27819]: I0319 09:52:01.490696 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" event={"ID":"63105ef4-b1a4-4f96-a846-e5a5f751b7b9","Type":"ContainerDied","Data":"d8cbefbe12be873f82fe6252c1dd4dcb6055dd79cd52016c6c1b225b747b9ad2"} Mar 19 09:52:01.500728 master-0 kubenswrapper[27819]: I0319 09:52:01.500678 27819 scope.go:117] "RemoveContainer" containerID="3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3" Mar 19 09:52:01.527717 master-0 kubenswrapper[27819]: I0319 09:52:01.527640 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 19 09:52:01.556835 master-0 kubenswrapper[27819]: I0319 09:52:01.556720 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5df5bc68f9-s4qw5"] Mar 19 09:52:01.575291 master-0 kubenswrapper[27819]: I0319 09:52:01.572866 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5df5bc68f9-s4qw5"] Mar 19 09:52:01.592576 master-0 kubenswrapper[27819]: I0319 09:52:01.591914 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.498218787 podStartE2EDuration="17.591890236s" podCreationTimestamp="2026-03-19 09:51:44 +0000 UTC" firstStartedPulling="2026-03-19 09:51:45.663390251 +0000 UTC m=+1090.584967953" lastFinishedPulling="2026-03-19 09:52:00.75706171 +0000 UTC m=+1105.678639402" 
observedRunningTime="2026-03-19 09:52:01.579285769 +0000 UTC m=+1106.500863471" watchObservedRunningTime="2026-03-19 09:52:01.591890236 +0000 UTC m=+1106.513467928" Mar 19 09:52:01.612195 master-0 kubenswrapper[27819]: I0319 09:52:01.612114 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=8.612094689 podStartE2EDuration="8.612094689s" podCreationTimestamp="2026-03-19 09:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:01.606444122 +0000 UTC m=+1106.528021814" watchObservedRunningTime="2026-03-19 09:52:01.612094689 +0000 UTC m=+1106.533672381" Mar 19 09:52:01.621375 master-0 kubenswrapper[27819]: I0319 09:52:01.621323 27819 scope.go:117] "RemoveContainer" containerID="986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4" Mar 19 09:52:01.624687 master-0 kubenswrapper[27819]: E0319 09:52:01.624641 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4\": container with ID starting with 986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4 not found: ID does not exist" containerID="986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4" Mar 19 09:52:01.624786 master-0 kubenswrapper[27819]: I0319 09:52:01.624701 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4"} err="failed to get container status \"986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4\": rpc error: code = NotFound desc = could not find container \"986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4\": container with ID starting with 986eb88801cdba8950c314d956dd773184e4df95cb4021ebee0b4364a1b779d4 not found: ID 
does not exist" Mar 19 09:52:01.624786 master-0 kubenswrapper[27819]: I0319 09:52:01.624731 27819 scope.go:117] "RemoveContainer" containerID="3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3" Mar 19 09:52:01.626835 master-0 kubenswrapper[27819]: E0319 09:52:01.626797 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3\": container with ID starting with 3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3 not found: ID does not exist" containerID="3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3" Mar 19 09:52:01.626954 master-0 kubenswrapper[27819]: I0319 09:52:01.626927 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3"} err="failed to get container status \"3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3\": rpc error: code = NotFound desc = could not find container \"3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3\": container with ID starting with 3bc063824c407d56e597074f6f3e699530de0bd9a3df684149e362466eedf5f3 not found: ID does not exist" Mar 19 09:52:02.942573 master-0 kubenswrapper[27819]: I0319 09:52:02.934672 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 09:52:02.947736 master-0 kubenswrapper[27819]: I0319 09:52:02.947692 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 09:52:03.143882 master-0 kubenswrapper[27819]: I0319 09:52:03.143823 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:52:03.150030 master-0 kubenswrapper[27819]: I0319 09:52:03.149406 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5k89" Mar 19 09:52:03.263032 master-0 kubenswrapper[27819]: I0319 09:52:03.262982 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-scripts\") pod \"87b8a71d-51a3-434e-92dd-4e7e341898f5\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " Mar 19 09:52:03.263352 master-0 kubenswrapper[27819]: I0319 09:52:03.263124 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-config-data\") pod \"87b8a71d-51a3-434e-92dd-4e7e341898f5\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " Mar 19 09:52:03.263352 master-0 kubenswrapper[27819]: I0319 09:52:03.263271 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-config-data\") pod \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " Mar 19 09:52:03.263352 master-0 kubenswrapper[27819]: I0319 09:52:03.263307 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgv46\" (UniqueName: \"kubernetes.io/projected/87b8a71d-51a3-434e-92dd-4e7e341898f5-kube-api-access-wgv46\") pod \"87b8a71d-51a3-434e-92dd-4e7e341898f5\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " Mar 19 09:52:03.263352 master-0 kubenswrapper[27819]: I0319 09:52:03.263336 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gqrt\" (UniqueName: \"kubernetes.io/projected/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-kube-api-access-9gqrt\") pod \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " Mar 19 09:52:03.263720 master-0 kubenswrapper[27819]: I0319 09:52:03.263447 27819 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-combined-ca-bundle\") pod \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " Mar 19 09:52:03.263720 master-0 kubenswrapper[27819]: I0319 09:52:03.263496 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-scripts\") pod \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\" (UID: \"63105ef4-b1a4-4f96-a846-e5a5f751b7b9\") " Mar 19 09:52:03.263720 master-0 kubenswrapper[27819]: I0319 09:52:03.263532 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle\") pod \"87b8a71d-51a3-434e-92dd-4e7e341898f5\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " Mar 19 09:52:03.268692 master-0 kubenswrapper[27819]: I0319 09:52:03.268387 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-scripts" (OuterVolumeSpecName: "scripts") pod "87b8a71d-51a3-434e-92dd-4e7e341898f5" (UID: "87b8a71d-51a3-434e-92dd-4e7e341898f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:03.268692 master-0 kubenswrapper[27819]: I0319 09:52:03.268487 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b8a71d-51a3-434e-92dd-4e7e341898f5-kube-api-access-wgv46" (OuterVolumeSpecName: "kube-api-access-wgv46") pod "87b8a71d-51a3-434e-92dd-4e7e341898f5" (UID: "87b8a71d-51a3-434e-92dd-4e7e341898f5"). InnerVolumeSpecName "kube-api-access-wgv46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:03.268978 master-0 kubenswrapper[27819]: I0319 09:52:03.268947 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-scripts" (OuterVolumeSpecName: "scripts") pod "63105ef4-b1a4-4f96-a846-e5a5f751b7b9" (UID: "63105ef4-b1a4-4f96-a846-e5a5f751b7b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:03.273862 master-0 kubenswrapper[27819]: I0319 09:52:03.273763 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-kube-api-access-9gqrt" (OuterVolumeSpecName: "kube-api-access-9gqrt") pod "63105ef4-b1a4-4f96-a846-e5a5f751b7b9" (UID: "63105ef4-b1a4-4f96-a846-e5a5f751b7b9"). InnerVolumeSpecName "kube-api-access-9gqrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:03.291014 master-0 kubenswrapper[27819]: I0319 09:52:03.290959 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63105ef4-b1a4-4f96-a846-e5a5f751b7b9" (UID: "63105ef4-b1a4-4f96-a846-e5a5f751b7b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:03.295967 master-0 kubenswrapper[27819]: I0319 09:52:03.295910 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-config-data" (OuterVolumeSpecName: "config-data") pod "63105ef4-b1a4-4f96-a846-e5a5f751b7b9" (UID: "63105ef4-b1a4-4f96-a846-e5a5f751b7b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:03.300644 master-0 kubenswrapper[27819]: I0319 09:52:03.300531 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" path="/var/lib/kubelet/pods/492322b8-ccb0-4440-b0f4-8d43bbd889e0/volumes" Mar 19 09:52:03.302783 master-0 kubenswrapper[27819]: I0319 09:52:03.302667 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-config-data" (OuterVolumeSpecName: "config-data") pod "87b8a71d-51a3-434e-92dd-4e7e341898f5" (UID: "87b8a71d-51a3-434e-92dd-4e7e341898f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:03.337949 master-0 kubenswrapper[27819]: E0319 09:52:03.337840 27819 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle podName:87b8a71d-51a3-434e-92dd-4e7e341898f5 nodeName:}" failed. No retries permitted until 2026-03-19 09:52:03.8003197 +0000 UTC m=+1108.721897392 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle") pod "87b8a71d-51a3-434e-92dd-4e7e341898f5" (UID: "87b8a71d-51a3-434e-92dd-4e7e341898f5") : error deleting /var/lib/kubelet/pods/87b8a71d-51a3-434e-92dd-4e7e341898f5/volume-subpaths: remove /var/lib/kubelet/pods/87b8a71d-51a3-434e-92dd-4e7e341898f5/volume-subpaths: no such file or directory Mar 19 09:52:03.367080 master-0 kubenswrapper[27819]: I0319 09:52:03.367009 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:03.367080 master-0 kubenswrapper[27819]: I0319 09:52:03.367059 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:03.367080 master-0 kubenswrapper[27819]: I0319 09:52:03.367070 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:03.367080 master-0 kubenswrapper[27819]: I0319 09:52:03.367083 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgv46\" (UniqueName: \"kubernetes.io/projected/87b8a71d-51a3-434e-92dd-4e7e341898f5-kube-api-access-wgv46\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:03.367080 master-0 kubenswrapper[27819]: I0319 09:52:03.367095 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gqrt\" (UniqueName: \"kubernetes.io/projected/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-kube-api-access-9gqrt\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:03.368080 master-0 kubenswrapper[27819]: I0319 09:52:03.367107 27819 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:03.368080 master-0 kubenswrapper[27819]: I0319 09:52:03.367119 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63105ef4-b1a4-4f96-a846-e5a5f751b7b9-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:03.520656 master-0 kubenswrapper[27819]: I0319 09:52:03.520584 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-v5k89" event={"ID":"87b8a71d-51a3-434e-92dd-4e7e341898f5","Type":"ContainerDied","Data":"babd97c7df40ed3ec1e1f222ed7f4e6139fe121837d8472e5ff789f46217edca"} Mar 19 09:52:03.520656 master-0 kubenswrapper[27819]: I0319 09:52:03.520635 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="babd97c7df40ed3ec1e1f222ed7f4e6139fe121837d8472e5ff789f46217edca" Mar 19 09:52:03.520977 master-0 kubenswrapper[27819]: I0319 09:52:03.520660 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-v5k89" Mar 19 09:52:03.523207 master-0 kubenswrapper[27819]: I0319 09:52:03.523160 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" event={"ID":"63105ef4-b1a4-4f96-a846-e5a5f751b7b9","Type":"ContainerDied","Data":"931f40c3e7d6dc9a46cfc233f26f63f4b6f16dfed94c79e7c36f2d5f0eb5ef78"} Mar 19 09:52:03.523308 master-0 kubenswrapper[27819]: I0319 09:52:03.523219 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="931f40c3e7d6dc9a46cfc233f26f63f4b6f16dfed94c79e7c36f2d5f0eb5ef78" Mar 19 09:52:03.523367 master-0 kubenswrapper[27819]: I0319 09:52:03.523347 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fg4fw" Mar 19 09:52:03.668687 master-0 kubenswrapper[27819]: I0319 09:52:03.668496 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 09:52:03.669179 master-0 kubenswrapper[27819]: E0319 09:52:03.669149 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerName="init" Mar 19 09:52:03.669179 master-0 kubenswrapper[27819]: I0319 09:52:03.669171 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerName="init" Mar 19 09:52:03.669289 master-0 kubenswrapper[27819]: E0319 09:52:03.669200 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b8a71d-51a3-434e-92dd-4e7e341898f5" containerName="nova-manage" Mar 19 09:52:03.669289 master-0 kubenswrapper[27819]: I0319 09:52:03.669207 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b8a71d-51a3-434e-92dd-4e7e341898f5" containerName="nova-manage" Mar 19 09:52:03.669289 master-0 kubenswrapper[27819]: E0319 09:52:03.669219 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63105ef4-b1a4-4f96-a846-e5a5f751b7b9" containerName="nova-cell1-conductor-db-sync" Mar 19 09:52:03.669289 master-0 kubenswrapper[27819]: I0319 09:52:03.669227 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="63105ef4-b1a4-4f96-a846-e5a5f751b7b9" containerName="nova-cell1-conductor-db-sync" Mar 19 09:52:03.669289 master-0 kubenswrapper[27819]: E0319 09:52:03.669242 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerName="dnsmasq-dns" Mar 19 09:52:03.669289 master-0 kubenswrapper[27819]: I0319 09:52:03.669248 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerName="dnsmasq-dns" Mar 19 09:52:03.669528 master-0 kubenswrapper[27819]: I0319 
09:52:03.669487 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="63105ef4-b1a4-4f96-a846-e5a5f751b7b9" containerName="nova-cell1-conductor-db-sync" Mar 19 09:52:03.669528 master-0 kubenswrapper[27819]: I0319 09:52:03.669505 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="492322b8-ccb0-4440-b0f4-8d43bbd889e0" containerName="dnsmasq-dns" Mar 19 09:52:03.669528 master-0 kubenswrapper[27819]: I0319 09:52:03.669526 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b8a71d-51a3-434e-92dd-4e7e341898f5" containerName="nova-manage" Mar 19 09:52:03.670331 master-0 kubenswrapper[27819]: I0319 09:52:03.670307 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.672280 master-0 kubenswrapper[27819]: I0319 09:52:03.672095 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 19 09:52:03.714570 master-0 kubenswrapper[27819]: I0319 09:52:03.685499 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 09:52:03.779768 master-0 kubenswrapper[27819]: I0319 09:52:03.779680 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwjg8\" (UniqueName: \"kubernetes.io/projected/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-kube-api-access-dwjg8\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.779979 master-0 kubenswrapper[27819]: I0319 09:52:03.779938 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.780093 
master-0 kubenswrapper[27819]: I0319 09:52:03.780063 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.797721 master-0 kubenswrapper[27819]: I0319 09:52:03.797655 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 09:52:03.798792 master-0 kubenswrapper[27819]: I0319 09:52:03.798758 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 09:52:03.842310 master-0 kubenswrapper[27819]: I0319 09:52:03.842230 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:03.888802 master-0 kubenswrapper[27819]: I0319 09:52:03.888744 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle\") pod \"87b8a71d-51a3-434e-92dd-4e7e341898f5\" (UID: \"87b8a71d-51a3-434e-92dd-4e7e341898f5\") " Mar 19 09:52:03.889354 master-0 kubenswrapper[27819]: I0319 09:52:03.889333 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.889517 master-0 kubenswrapper[27819]: I0319 09:52:03.889498 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.889750 master-0 kubenswrapper[27819]: I0319 09:52:03.889735 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwjg8\" (UniqueName: \"kubernetes.io/projected/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-kube-api-access-dwjg8\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.893763 master-0 kubenswrapper[27819]: I0319 09:52:03.893715 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87b8a71d-51a3-434e-92dd-4e7e341898f5" (UID: "87b8a71d-51a3-434e-92dd-4e7e341898f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:03.899657 master-0 kubenswrapper[27819]: I0319 09:52:03.899578 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.899900 master-0 kubenswrapper[27819]: I0319 09:52:03.899873 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.906915 master-0 kubenswrapper[27819]: I0319 09:52:03.906429 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:03.906915 master-0 kubenswrapper[27819]: I0319 09:52:03.906660 27819 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/nova-scheduler-0" podUID="2bb8c2ff-550a-41ea-a571-d07825b400c4" containerName="nova-scheduler-scheduler" containerID="cri-o://bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" gracePeriod=30 Mar 19 09:52:03.928850 master-0 kubenswrapper[27819]: I0319 09:52:03.923948 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwjg8\" (UniqueName: \"kubernetes.io/projected/99366122-7aac-42b6-9c17-f2cb7cc6e4d0-kube-api-access-dwjg8\") pod \"nova-cell1-conductor-0\" (UID: \"99366122-7aac-42b6-9c17-f2cb7cc6e4d0\") " pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:03.928850 master-0 kubenswrapper[27819]: I0319 09:52:03.928196 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:03.991957 master-0 kubenswrapper[27819]: I0319 09:52:03.991796 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b8a71d-51a3-434e-92dd-4e7e341898f5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:04.046176 master-0 kubenswrapper[27819]: I0319 09:52:04.046093 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:04.538367 master-0 kubenswrapper[27819]: I0319 09:52:04.538052 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-log" containerID="cri-o://e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed" gracePeriod=30 Mar 19 09:52:04.538367 master-0 kubenswrapper[27819]: I0319 09:52:04.538130 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-api" containerID="cri-o://c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea" gracePeriod=30 Mar 19 09:52:04.682753 master-0 kubenswrapper[27819]: I0319 09:52:04.682642 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 19 09:52:04.830002 master-0 kubenswrapper[27819]: I0319 09:52:04.829923 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:04.830174 master-0 kubenswrapper[27819]: I0319 09:52:04.829981 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.1:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:04.963631 master-0 kubenswrapper[27819]: E0319 09:52:04.963560 27819 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 09:52:04.965042 master-0 kubenswrapper[27819]: E0319 09:52:04.965005 27819 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 09:52:04.966279 master-0 kubenswrapper[27819]: E0319 09:52:04.966235 27819 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 09:52:04.966348 master-0 kubenswrapper[27819]: E0319 09:52:04.966284 27819 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2bb8c2ff-550a-41ea-a571-d07825b400c4" containerName="nova-scheduler-scheduler" Mar 19 09:52:05.554940 master-0 kubenswrapper[27819]: I0319 09:52:05.554806 27819 generic.go:334] "Generic (PLEG): container finished" podID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerID="e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed" exitCode=143 Mar 19 09:52:05.554940 master-0 kubenswrapper[27819]: I0319 09:52:05.554884 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc","Type":"ContainerDied","Data":"e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed"} Mar 19 09:52:05.559245 master-0 kubenswrapper[27819]: I0319 09:52:05.559187 27819 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"99366122-7aac-42b6-9c17-f2cb7cc6e4d0","Type":"ContainerStarted","Data":"4d4e0d754c95405932d3bcb8a051bfe87d2ec949430518e35967c17893807cba"} Mar 19 09:52:05.559331 master-0 kubenswrapper[27819]: I0319 09:52:05.559260 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"99366122-7aac-42b6-9c17-f2cb7cc6e4d0","Type":"ContainerStarted","Data":"a5c9dd0c8f6a03fdd2ed51848a291f073c296b623e1ecae392adbe74ae3797e7"} Mar 19 09:52:05.559331 master-0 kubenswrapper[27819]: I0319 09:52:05.559272 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-log" containerID="cri-o://f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22" gracePeriod=30 Mar 19 09:52:05.559437 master-0 kubenswrapper[27819]: I0319 09:52:05.559410 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-metadata" containerID="cri-o://b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644" gracePeriod=30 Mar 19 09:52:05.602430 master-0 kubenswrapper[27819]: I0319 09:52:05.602332 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.602310642 podStartE2EDuration="2.602310642s" podCreationTimestamp="2026-03-19 09:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:05.585612659 +0000 UTC m=+1110.507190351" watchObservedRunningTime="2026-03-19 09:52:05.602310642 +0000 UTC m=+1110.523888334" Mar 19 09:52:06.574826 master-0 kubenswrapper[27819]: I0319 09:52:06.574775 27819 generic.go:334] "Generic (PLEG): 
container finished" podID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerID="f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22" exitCode=143 Mar 19 09:52:06.575381 master-0 kubenswrapper[27819]: I0319 09:52:06.574848 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc7afa40-de74-4ee3-bf75-f88336d7207b","Type":"ContainerDied","Data":"f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22"} Mar 19 09:52:06.575381 master-0 kubenswrapper[27819]: I0319 09:52:06.574961 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:07.526273 master-0 kubenswrapper[27819]: I0319 09:52:07.525727 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:07.591224 master-0 kubenswrapper[27819]: I0319 09:52:07.591120 27819 generic.go:334] "Generic (PLEG): container finished" podID="2bb8c2ff-550a-41ea-a571-d07825b400c4" containerID="bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" exitCode=0 Mar 19 09:52:07.591859 master-0 kubenswrapper[27819]: I0319 09:52:07.591596 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:07.591859 master-0 kubenswrapper[27819]: I0319 09:52:07.591672 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8c2ff-550a-41ea-a571-d07825b400c4","Type":"ContainerDied","Data":"bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef"} Mar 19 09:52:07.591859 master-0 kubenswrapper[27819]: I0319 09:52:07.591702 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2bb8c2ff-550a-41ea-a571-d07825b400c4","Type":"ContainerDied","Data":"31b01c027abcf45aadd8f95f3603db7f4e2b05b55f5c1c946d97d3c98f5bad58"} Mar 19 09:52:07.591859 master-0 kubenswrapper[27819]: I0319 09:52:07.591724 27819 scope.go:117] "RemoveContainer" containerID="bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" Mar 19 09:52:07.649208 master-0 kubenswrapper[27819]: I0319 09:52:07.649177 27819 scope.go:117] "RemoveContainer" containerID="bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" Mar 19 09:52:07.649790 master-0 kubenswrapper[27819]: E0319 09:52:07.649712 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef\": container with ID starting with bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef not found: ID does not exist" containerID="bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef" Mar 19 09:52:07.649790 master-0 kubenswrapper[27819]: I0319 09:52:07.649742 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef"} err="failed to get container status \"bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef\": rpc error: code = NotFound desc = could not find container 
\"bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef\": container with ID starting with bcde8ecc60e3894f7db9483fbe8d439f9b3bcd9ac21c0ccc48575cab66b4c5ef not found: ID does not exist" Mar 19 09:52:07.705566 master-0 kubenswrapper[27819]: I0319 09:52:07.705417 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-config-data\") pod \"2bb8c2ff-550a-41ea-a571-d07825b400c4\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " Mar 19 09:52:07.706202 master-0 kubenswrapper[27819]: I0319 09:52:07.706176 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-combined-ca-bundle\") pod \"2bb8c2ff-550a-41ea-a571-d07825b400c4\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " Mar 19 09:52:07.706352 master-0 kubenswrapper[27819]: I0319 09:52:07.706334 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6smv\" (UniqueName: \"kubernetes.io/projected/2bb8c2ff-550a-41ea-a571-d07825b400c4-kube-api-access-z6smv\") pod \"2bb8c2ff-550a-41ea-a571-d07825b400c4\" (UID: \"2bb8c2ff-550a-41ea-a571-d07825b400c4\") " Mar 19 09:52:07.730256 master-0 kubenswrapper[27819]: I0319 09:52:07.730198 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb8c2ff-550a-41ea-a571-d07825b400c4-kube-api-access-z6smv" (OuterVolumeSpecName: "kube-api-access-z6smv") pod "2bb8c2ff-550a-41ea-a571-d07825b400c4" (UID: "2bb8c2ff-550a-41ea-a571-d07825b400c4"). InnerVolumeSpecName "kube-api-access-z6smv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:07.740426 master-0 kubenswrapper[27819]: I0319 09:52:07.740363 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bb8c2ff-550a-41ea-a571-d07825b400c4" (UID: "2bb8c2ff-550a-41ea-a571-d07825b400c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:07.742957 master-0 kubenswrapper[27819]: I0319 09:52:07.742898 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-config-data" (OuterVolumeSpecName: "config-data") pod "2bb8c2ff-550a-41ea-a571-d07825b400c4" (UID: "2bb8c2ff-550a-41ea-a571-d07825b400c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:07.808532 master-0 kubenswrapper[27819]: I0319 09:52:07.808457 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:07.808532 master-0 kubenswrapper[27819]: I0319 09:52:07.808514 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6smv\" (UniqueName: \"kubernetes.io/projected/2bb8c2ff-550a-41ea-a571-d07825b400c4-kube-api-access-z6smv\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:07.808532 master-0 kubenswrapper[27819]: I0319 09:52:07.808526 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb8c2ff-550a-41ea-a571-d07825b400c4-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:08.022089 master-0 kubenswrapper[27819]: I0319 09:52:08.015064 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 
19 09:52:08.028135 master-0 kubenswrapper[27819]: I0319 09:52:08.025750 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:08.056575 master-0 kubenswrapper[27819]: I0319 09:52:08.055826 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:08.056575 master-0 kubenswrapper[27819]: E0319 09:52:08.056335 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb8c2ff-550a-41ea-a571-d07825b400c4" containerName="nova-scheduler-scheduler" Mar 19 09:52:08.056575 master-0 kubenswrapper[27819]: I0319 09:52:08.056351 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bb8c2ff-550a-41ea-a571-d07825b400c4" containerName="nova-scheduler-scheduler" Mar 19 09:52:08.056888 master-0 kubenswrapper[27819]: I0319 09:52:08.056635 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb8c2ff-550a-41ea-a571-d07825b400c4" containerName="nova-scheduler-scheduler" Mar 19 09:52:08.070107 master-0 kubenswrapper[27819]: I0319 09:52:08.057380 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:08.070107 master-0 kubenswrapper[27819]: I0319 09:52:08.062402 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 09:52:08.076449 master-0 kubenswrapper[27819]: I0319 09:52:08.076385 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:08.118102 master-0 kubenswrapper[27819]: I0319 09:52:08.118046 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.118175 master-0 kubenswrapper[27819]: I0319 09:52:08.118150 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxbvk\" (UniqueName: \"kubernetes.io/projected/8cb2ca57-2118-4862-a72c-3cb12baf7972-kube-api-access-kxbvk\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.118217 master-0 kubenswrapper[27819]: I0319 09:52:08.118196 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-config-data\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.207156 master-0 kubenswrapper[27819]: I0319 09:52:08.207086 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:08.223511 master-0 kubenswrapper[27819]: I0319 09:52:08.223441 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-config-data\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.231261 master-0 kubenswrapper[27819]: I0319 09:52:08.225305 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.231261 master-0 kubenswrapper[27819]: I0319 09:52:08.226148 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxbvk\" (UniqueName: \"kubernetes.io/projected/8cb2ca57-2118-4862-a72c-3cb12baf7972-kube-api-access-kxbvk\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.231261 master-0 kubenswrapper[27819]: I0319 09:52:08.227720 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-config-data\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.238169 master-0 kubenswrapper[27819]: I0319 09:52:08.237671 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.268839 master-0 kubenswrapper[27819]: I0319 
09:52:08.267832 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxbvk\" (UniqueName: \"kubernetes.io/projected/8cb2ca57-2118-4862-a72c-3cb12baf7972-kube-api-access-kxbvk\") pod \"nova-scheduler-0\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:08.328068 master-0 kubenswrapper[27819]: I0319 09:52:08.327889 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-combined-ca-bundle\") pod \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " Mar 19 09:52:08.328281 master-0 kubenswrapper[27819]: I0319 09:52:08.328092 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-config-data\") pod \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " Mar 19 09:52:08.328338 master-0 kubenswrapper[27819]: I0319 09:52:08.328315 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-logs\") pod \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " Mar 19 09:52:08.328479 master-0 kubenswrapper[27819]: I0319 09:52:08.328446 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmbhj\" (UniqueName: \"kubernetes.io/projected/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-kube-api-access-zmbhj\") pod \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\" (UID: \"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc\") " Mar 19 09:52:08.329039 master-0 kubenswrapper[27819]: I0319 09:52:08.328989 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-logs" (OuterVolumeSpecName: "logs") pod "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" (UID: "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:52:08.331759 master-0 kubenswrapper[27819]: I0319 09:52:08.331651 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:08.334444 master-0 kubenswrapper[27819]: I0319 09:52:08.334389 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-kube-api-access-zmbhj" (OuterVolumeSpecName: "kube-api-access-zmbhj") pod "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" (UID: "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc"). InnerVolumeSpecName "kube-api-access-zmbhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:08.361273 master-0 kubenswrapper[27819]: I0319 09:52:08.361148 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-config-data" (OuterVolumeSpecName: "config-data") pod "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" (UID: "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:08.372613 master-0 kubenswrapper[27819]: I0319 09:52:08.369930 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" (UID: "dd798dcf-bd1d-46c9-82be-c0feac3dc1dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:08.437917 master-0 kubenswrapper[27819]: I0319 09:52:08.437846 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:08.437917 master-0 kubenswrapper[27819]: I0319 09:52:08.437908 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmbhj\" (UniqueName: \"kubernetes.io/projected/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-kube-api-access-zmbhj\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:08.437917 master-0 kubenswrapper[27819]: I0319 09:52:08.437922 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:08.498615 master-0 kubenswrapper[27819]: I0319 09:52:08.498249 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:08.619753 master-0 kubenswrapper[27819]: I0319 09:52:08.619701 27819 generic.go:334] "Generic (PLEG): container finished" podID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerID="c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea" exitCode=0 Mar 19 09:52:08.620283 master-0 kubenswrapper[27819]: I0319 09:52:08.619775 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc","Type":"ContainerDied","Data":"c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea"} Mar 19 09:52:08.620283 master-0 kubenswrapper[27819]: I0319 09:52:08.619804 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dd798dcf-bd1d-46c9-82be-c0feac3dc1dc","Type":"ContainerDied","Data":"85aa3173532a6e2c0be6cc77d3f1b07256142b57bcebf865dc213cb4b91e0dd2"} Mar 19 09:52:08.620283 master-0 kubenswrapper[27819]: I0319 09:52:08.619820 27819 scope.go:117] "RemoveContainer" containerID="c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea" Mar 19 09:52:08.620283 master-0 kubenswrapper[27819]: I0319 09:52:08.619983 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:08.668184 master-0 kubenswrapper[27819]: I0319 09:52:08.668131 27819 scope.go:117] "RemoveContainer" containerID="e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed" Mar 19 09:52:08.738697 master-0 kubenswrapper[27819]: I0319 09:52:08.723479 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:08.738697 master-0 kubenswrapper[27819]: I0319 09:52:08.736174 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:08.740695 master-0 kubenswrapper[27819]: I0319 09:52:08.740197 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:08.740896 master-0 kubenswrapper[27819]: E0319 09:52:08.740826 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-log" Mar 19 09:52:08.740896 master-0 kubenswrapper[27819]: I0319 09:52:08.740848 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-log" Mar 19 09:52:08.740896 master-0 kubenswrapper[27819]: E0319 09:52:08.740873 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-api" Mar 19 09:52:08.740896 master-0 kubenswrapper[27819]: I0319 09:52:08.740883 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-api" Mar 19 09:52:08.741500 master-0 kubenswrapper[27819]: I0319 09:52:08.741145 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-api" Mar 19 09:52:08.741500 master-0 kubenswrapper[27819]: I0319 09:52:08.741180 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" containerName="nova-api-log" Mar 19 
09:52:08.742669 master-0 kubenswrapper[27819]: I0319 09:52:08.742644 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:08.744018 master-0 kubenswrapper[27819]: I0319 09:52:08.743975 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.744332 master-0 kubenswrapper[27819]: I0319 09:52:08.744287 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-logs\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.744536 master-0 kubenswrapper[27819]: I0319 09:52:08.744512 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-config-data\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.744619 master-0 kubenswrapper[27819]: I0319 09:52:08.744558 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lx42\" (UniqueName: \"kubernetes.io/projected/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-kube-api-access-2lx42\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.749173 master-0 kubenswrapper[27819]: I0319 09:52:08.749069 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 09:52:08.751983 master-0 kubenswrapper[27819]: I0319 09:52:08.751943 27819 scope.go:117] "RemoveContainer" 
containerID="c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea" Mar 19 09:52:08.754879 master-0 kubenswrapper[27819]: E0319 09:52:08.754834 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea\": container with ID starting with c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea not found: ID does not exist" containerID="c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea" Mar 19 09:52:08.754954 master-0 kubenswrapper[27819]: I0319 09:52:08.754889 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea"} err="failed to get container status \"c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea\": rpc error: code = NotFound desc = could not find container \"c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea\": container with ID starting with c8dc77b2a7b055b4ed2c8dc4704fda1b16848a0bc15db6732bfa40885bf011ea not found: ID does not exist" Mar 19 09:52:08.754954 master-0 kubenswrapper[27819]: I0319 09:52:08.754922 27819 scope.go:117] "RemoveContainer" containerID="e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed" Mar 19 09:52:08.755690 master-0 kubenswrapper[27819]: E0319 09:52:08.755648 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed\": container with ID starting with e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed not found: ID does not exist" containerID="e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed" Mar 19 09:52:08.755856 master-0 kubenswrapper[27819]: I0319 09:52:08.755820 27819 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed"} err="failed to get container status \"e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed\": rpc error: code = NotFound desc = could not find container \"e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed\": container with ID starting with e1e28138810fa0e9d51d200ca858dd7c3b6f93651b91c3541bbfbecd519cc9ed not found: ID does not exist" Mar 19 09:52:08.763651 master-0 kubenswrapper[27819]: I0319 09:52:08.763604 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:08.847061 master-0 kubenswrapper[27819]: I0319 09:52:08.846967 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lx42\" (UniqueName: \"kubernetes.io/projected/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-kube-api-access-2lx42\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.847061 master-0 kubenswrapper[27819]: I0319 09:52:08.847033 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-config-data\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.847599 master-0 kubenswrapper[27819]: I0319 09:52:08.847136 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.847599 master-0 kubenswrapper[27819]: I0319 09:52:08.847236 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-logs\") pod 
\"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.847729 master-0 kubenswrapper[27819]: I0319 09:52:08.847713 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-logs\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.855885 master-0 kubenswrapper[27819]: I0319 09:52:08.855800 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-config-data\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.856174 master-0 kubenswrapper[27819]: I0319 09:52:08.855932 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:08.866240 master-0 kubenswrapper[27819]: I0319 09:52:08.866190 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lx42\" (UniqueName: \"kubernetes.io/projected/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-kube-api-access-2lx42\") pod \"nova-api-0\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " pod="openstack/nova-api-0" Mar 19 09:52:09.012580 master-0 kubenswrapper[27819]: I0319 09:52:09.012376 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:09.017342 master-0 kubenswrapper[27819]: W0319 09:52:09.013757 27819 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cb2ca57_2118_4862_a72c_3cb12baf7972.slice/crio-0d26724e00f6b2d3150a98b6c6b094dbb7b4831cc59fbdf9037c7a099ad782dc WatchSource:0}: Error finding container 0d26724e00f6b2d3150a98b6c6b094dbb7b4831cc59fbdf9037c7a099ad782dc: Status 404 returned error can't find the container with id 0d26724e00f6b2d3150a98b6c6b094dbb7b4831cc59fbdf9037c7a099ad782dc Mar 19 09:52:09.066155 master-0 kubenswrapper[27819]: I0319 09:52:09.065983 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:09.297197 master-0 kubenswrapper[27819]: I0319 09:52:09.297139 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bb8c2ff-550a-41ea-a571-d07825b400c4" path="/var/lib/kubelet/pods/2bb8c2ff-550a-41ea-a571-d07825b400c4/volumes" Mar 19 09:52:09.297879 master-0 kubenswrapper[27819]: I0319 09:52:09.297848 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd798dcf-bd1d-46c9-82be-c0feac3dc1dc" path="/var/lib/kubelet/pods/dd798dcf-bd1d-46c9-82be-c0feac3dc1dc/volumes" Mar 19 09:52:09.550157 master-0 kubenswrapper[27819]: I0319 09:52:09.548917 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:09.647256 master-0 kubenswrapper[27819]: I0319 09:52:09.647199 27819 generic.go:334] "Generic (PLEG): container finished" podID="ca78928f-b0d4-4090-acba-66e98b7d312d" containerID="1bc58f48ed200e2419186d0d5777961a6dcc4366af614fc577fad090c15e016f" exitCode=0 Mar 19 09:52:09.647862 master-0 kubenswrapper[27819]: I0319 09:52:09.647255 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerDied","Data":"1bc58f48ed200e2419186d0d5777961a6dcc4366af614fc577fad090c15e016f"} Mar 19 09:52:09.653595 master-0 kubenswrapper[27819]: I0319 09:52:09.651310 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"8cb2ca57-2118-4862-a72c-3cb12baf7972","Type":"ContainerStarted","Data":"a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05"} Mar 19 09:52:09.653595 master-0 kubenswrapper[27819]: I0319 09:52:09.651374 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cb2ca57-2118-4862-a72c-3cb12baf7972","Type":"ContainerStarted","Data":"0d26724e00f6b2d3150a98b6c6b094dbb7b4831cc59fbdf9037c7a099ad782dc"} Mar 19 09:52:09.657841 master-0 kubenswrapper[27819]: I0319 09:52:09.657766 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"550c9a40-cbc7-4df8-aa14-ccc4e43c096d","Type":"ContainerStarted","Data":"5e778eb57b6246a94489e8c85beca7a5efa25be59d7ad726d664b07b635044cc"} Mar 19 09:52:09.711433 master-0 kubenswrapper[27819]: I0319 09:52:09.711328 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.711305692 podStartE2EDuration="2.711305692s" podCreationTimestamp="2026-03-19 09:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:09.698778808 +0000 UTC m=+1114.620356490" watchObservedRunningTime="2026-03-19 09:52:09.711305692 +0000 UTC m=+1114.632883384" Mar 19 09:52:10.552876 master-0 kubenswrapper[27819]: I0319 09:52:10.552829 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:10.677846 master-0 kubenswrapper[27819]: I0319 09:52:10.677784 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"550c9a40-cbc7-4df8-aa14-ccc4e43c096d","Type":"ContainerStarted","Data":"2dbd62e4b02e3ca11349164ed81ff9f7312fc1c2636e9694f8da5359f62e2af9"} Mar 19 09:52:10.677846 master-0 kubenswrapper[27819]: I0319 09:52:10.677844 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"550c9a40-cbc7-4df8-aa14-ccc4e43c096d","Type":"ContainerStarted","Data":"97b734b20cfb4ff1883fd921382a0c940df7a6d3b6dca630e4f8ef724ada9123"} Mar 19 09:52:10.682377 master-0 kubenswrapper[27819]: I0319 09:52:10.682335 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerStarted","Data":"4eb4ab1bc79e48e0cc8b88b2f642c7ae550a1abc47e55216236306c3fc3f7034"} Mar 19 09:52:10.687517 master-0 kubenswrapper[27819]: I0319 09:52:10.687466 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-nova-metadata-tls-certs\") pod \"cc7afa40-de74-4ee3-bf75-f88336d7207b\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " Mar 19 09:52:10.687882 master-0 kubenswrapper[27819]: I0319 09:52:10.687596 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk746\" (UniqueName: \"kubernetes.io/projected/cc7afa40-de74-4ee3-bf75-f88336d7207b-kube-api-access-wk746\") pod \"cc7afa40-de74-4ee3-bf75-f88336d7207b\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " Mar 19 09:52:10.687882 master-0 kubenswrapper[27819]: I0319 09:52:10.687814 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-config-data\") pod \"cc7afa40-de74-4ee3-bf75-f88336d7207b\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " Mar 19 09:52:10.687973 master-0 kubenswrapper[27819]: I0319 09:52:10.687960 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-combined-ca-bundle\") pod \"cc7afa40-de74-4ee3-bf75-f88336d7207b\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " Mar 19 09:52:10.688023 master-0 kubenswrapper[27819]: I0319 09:52:10.688002 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7afa40-de74-4ee3-bf75-f88336d7207b-logs\") pod \"cc7afa40-de74-4ee3-bf75-f88336d7207b\" (UID: \"cc7afa40-de74-4ee3-bf75-f88336d7207b\") " Mar 19 09:52:10.690593 master-0 kubenswrapper[27819]: I0319 09:52:10.690527 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc7afa40-de74-4ee3-bf75-f88336d7207b-logs" (OuterVolumeSpecName: "logs") pod "cc7afa40-de74-4ee3-bf75-f88336d7207b" (UID: "cc7afa40-de74-4ee3-bf75-f88336d7207b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:52:10.691393 master-0 kubenswrapper[27819]: I0319 09:52:10.691060 27819 generic.go:334] "Generic (PLEG): container finished" podID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerID="b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644" exitCode=0 Mar 19 09:52:10.691744 master-0 kubenswrapper[27819]: I0319 09:52:10.691698 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc7afa40-de74-4ee3-bf75-f88336d7207b","Type":"ContainerDied","Data":"b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644"} Mar 19 09:52:10.692015 master-0 kubenswrapper[27819]: I0319 09:52:10.691774 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc7afa40-de74-4ee3-bf75-f88336d7207b","Type":"ContainerDied","Data":"f8dec171cfacad9528abefc4c2a514b4cc9a252102770c88a0a720860bf76178"} Mar 19 09:52:10.692015 master-0 kubenswrapper[27819]: I0319 09:52:10.691801 27819 scope.go:117] "RemoveContainer" containerID="b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644" Mar 19 09:52:10.692015 master-0 kubenswrapper[27819]: I0319 09:52:10.691710 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:10.692613 master-0 kubenswrapper[27819]: I0319 09:52:10.692580 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7afa40-de74-4ee3-bf75-f88336d7207b-kube-api-access-wk746" (OuterVolumeSpecName: "kube-api-access-wk746") pod "cc7afa40-de74-4ee3-bf75-f88336d7207b" (UID: "cc7afa40-de74-4ee3-bf75-f88336d7207b"). InnerVolumeSpecName "kube-api-access-wk746". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:10.716125 master-0 kubenswrapper[27819]: I0319 09:52:10.716029 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-config-data" (OuterVolumeSpecName: "config-data") pod "cc7afa40-de74-4ee3-bf75-f88336d7207b" (UID: "cc7afa40-de74-4ee3-bf75-f88336d7207b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:10.721341 master-0 kubenswrapper[27819]: I0319 09:52:10.721295 27819 scope.go:117] "RemoveContainer" containerID="f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22" Mar 19 09:52:10.725402 master-0 kubenswrapper[27819]: I0319 09:52:10.725321 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc7afa40-de74-4ee3-bf75-f88336d7207b" (UID: "cc7afa40-de74-4ee3-bf75-f88336d7207b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:10.751671 master-0 kubenswrapper[27819]: I0319 09:52:10.750204 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cc7afa40-de74-4ee3-bf75-f88336d7207b" (UID: "cc7afa40-de74-4ee3-bf75-f88336d7207b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:10.755830 master-0 kubenswrapper[27819]: I0319 09:52:10.755775 27819 scope.go:117] "RemoveContainer" containerID="b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644" Mar 19 09:52:10.756286 master-0 kubenswrapper[27819]: E0319 09:52:10.756233 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644\": container with ID starting with b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644 not found: ID does not exist" containerID="b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644" Mar 19 09:52:10.756344 master-0 kubenswrapper[27819]: I0319 09:52:10.756284 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644"} err="failed to get container status \"b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644\": rpc error: code = NotFound desc = could not find container \"b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644\": container with ID starting with b369669af2cd6e480cb4316233d30382779afc81b729908e5eb49373b86d0644 not found: ID does not exist" Mar 19 09:52:10.756344 master-0 kubenswrapper[27819]: I0319 09:52:10.756313 27819 scope.go:117] "RemoveContainer" containerID="f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22" Mar 19 09:52:10.756688 master-0 kubenswrapper[27819]: E0319 09:52:10.756648 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22\": container with ID starting with f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22 not found: ID does not exist" 
containerID="f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22" Mar 19 09:52:10.756767 master-0 kubenswrapper[27819]: I0319 09:52:10.756684 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22"} err="failed to get container status \"f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22\": rpc error: code = NotFound desc = could not find container \"f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22\": container with ID starting with f5380ceac93b7260afe1774a64a8f820e8c760f8fad7cfef66f6c31d2cfb4a22 not found: ID does not exist" Mar 19 09:52:10.790411 master-0 kubenswrapper[27819]: I0319 09:52:10.790353 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:10.790411 master-0 kubenswrapper[27819]: I0319 09:52:10.790395 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc7afa40-de74-4ee3-bf75-f88336d7207b-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:10.790411 master-0 kubenswrapper[27819]: I0319 09:52:10.790405 27819 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:10.790411 master-0 kubenswrapper[27819]: I0319 09:52:10.790417 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk746\" (UniqueName: \"kubernetes.io/projected/cc7afa40-de74-4ee3-bf75-f88336d7207b-kube-api-access-wk746\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:10.790706 master-0 kubenswrapper[27819]: I0319 09:52:10.790426 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/cc7afa40-de74-4ee3-bf75-f88336d7207b-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:11.191613 master-0 kubenswrapper[27819]: I0319 09:52:11.191519 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.191480944 podStartE2EDuration="3.191480944s" podCreationTimestamp="2026-03-19 09:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:11.190419507 +0000 UTC m=+1116.111997219" watchObservedRunningTime="2026-03-19 09:52:11.191480944 +0000 UTC m=+1116.113058636" Mar 19 09:52:11.558785 master-0 kubenswrapper[27819]: I0319 09:52:11.556186 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:11.623571 master-0 kubenswrapper[27819]: I0319 09:52:11.623085 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:11.707812 master-0 kubenswrapper[27819]: I0319 09:52:11.707742 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerStarted","Data":"c3485dee1fb4bde4f05bea659951b3d78cca1f94d7e78197ee00c24aafbfc7fb"} Mar 19 09:52:11.707812 master-0 kubenswrapper[27819]: I0319 09:52:11.707823 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"ca78928f-b0d4-4090-acba-66e98b7d312d","Type":"ContainerStarted","Data":"6bb54773ad789f1c277043322bb21d671d59f73efeaba973b672ceffbc607c52"} Mar 19 09:52:12.050040 master-0 kubenswrapper[27819]: I0319 09:52:12.049975 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:12.050662 master-0 kubenswrapper[27819]: E0319 09:52:12.050635 27819 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-log" Mar 19 09:52:12.050662 master-0 kubenswrapper[27819]: I0319 09:52:12.050658 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-log" Mar 19 09:52:12.050817 master-0 kubenswrapper[27819]: E0319 09:52:12.050672 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-metadata" Mar 19 09:52:12.050817 master-0 kubenswrapper[27819]: I0319 09:52:12.050680 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-metadata" Mar 19 09:52:12.053161 master-0 kubenswrapper[27819]: I0319 09:52:12.051007 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-metadata" Mar 19 09:52:12.053161 master-0 kubenswrapper[27819]: I0319 09:52:12.051047 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" containerName="nova-metadata-log" Mar 19 09:52:12.053161 master-0 kubenswrapper[27819]: I0319 09:52:12.052448 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:12.055275 master-0 kubenswrapper[27819]: I0319 09:52:12.055226 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 19 09:52:12.056009 master-0 kubenswrapper[27819]: I0319 09:52:12.055980 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 09:52:12.094876 master-0 kubenswrapper[27819]: I0319 09:52:12.094780 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:12.223761 master-0 kubenswrapper[27819]: I0319 09:52:12.223643 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjd79\" (UniqueName: \"kubernetes.io/projected/4ed41734-ebfb-4bf9-836e-43e82b05e510-kube-api-access-hjd79\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.223761 master-0 kubenswrapper[27819]: I0319 09:52:12.223706 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.223994 master-0 kubenswrapper[27819]: I0319 09:52:12.223818 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-config-data\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.223994 master-0 kubenswrapper[27819]: I0319 09:52:12.223917 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.223994 master-0 kubenswrapper[27819]: I0319 09:52:12.223949 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed41734-ebfb-4bf9-836e-43e82b05e510-logs\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.326366 master-0 kubenswrapper[27819]: I0319 09:52:12.326305 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed41734-ebfb-4bf9-836e-43e82b05e510-logs\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.326612 master-0 kubenswrapper[27819]: I0319 09:52:12.326412 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjd79\" (UniqueName: \"kubernetes.io/projected/4ed41734-ebfb-4bf9-836e-43e82b05e510-kube-api-access-hjd79\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.326612 master-0 kubenswrapper[27819]: I0319 09:52:12.326448 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.326719 master-0 kubenswrapper[27819]: I0319 09:52:12.326682 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-config-data\") pod \"nova-metadata-0\" 
(UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.326827 master-0 kubenswrapper[27819]: I0319 09:52:12.326798 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.326883 master-0 kubenswrapper[27819]: I0319 09:52:12.326801 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed41734-ebfb-4bf9-836e-43e82b05e510-logs\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.333265 master-0 kubenswrapper[27819]: I0319 09:52:12.333209 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.340573 master-0 kubenswrapper[27819]: I0319 09:52:12.338152 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-config-data\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.353121 master-0 kubenswrapper[27819]: I0319 09:52:12.353054 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.355312 master-0 kubenswrapper[27819]: 
I0319 09:52:12.355269 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjd79\" (UniqueName: \"kubernetes.io/projected/4ed41734-ebfb-4bf9-836e-43e82b05e510-kube-api-access-hjd79\") pod \"nova-metadata-0\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " pod="openstack/nova-metadata-0" Mar 19 09:52:12.386516 master-0 kubenswrapper[27819]: I0319 09:52:12.386433 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:12.719311 master-0 kubenswrapper[27819]: I0319 09:52:12.719239 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 19 09:52:12.719311 master-0 kubenswrapper[27819]: I0319 09:52:12.719296 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Mar 19 09:52:12.764002 master-0 kubenswrapper[27819]: I0319 09:52:12.763851 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=78.827787477 podStartE2EDuration="2m0.763830935s" podCreationTimestamp="2026-03-19 09:50:12 +0000 UTC" firstStartedPulling="2026-03-19 09:50:24.420568262 +0000 UTC m=+1009.342145954" lastFinishedPulling="2026-03-19 09:51:06.35661173 +0000 UTC m=+1051.278189412" observedRunningTime="2026-03-19 09:52:12.763741963 +0000 UTC m=+1117.685319665" watchObservedRunningTime="2026-03-19 09:52:12.763830935 +0000 UTC m=+1117.685408627" Mar 19 09:52:12.862975 master-0 kubenswrapper[27819]: W0319 09:52:12.862919 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ed41734_ebfb_4bf9_836e_43e82b05e510.slice/crio-45a71a35f8b31a5447be32927c80a9944dacc72dc17bd222f627015071d538a5 WatchSource:0}: Error finding container 45a71a35f8b31a5447be32927c80a9944dacc72dc17bd222f627015071d538a5: Status 404 returned error can't find the container with id 
45a71a35f8b31a5447be32927c80a9944dacc72dc17bd222f627015071d538a5 Mar 19 09:52:12.869804 master-0 kubenswrapper[27819]: I0319 09:52:12.869736 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:13.299051 master-0 kubenswrapper[27819]: I0319 09:52:13.298987 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7afa40-de74-4ee3-bf75-f88336d7207b" path="/var/lib/kubelet/pods/cc7afa40-de74-4ee3-bf75-f88336d7207b/volumes" Mar 19 09:52:13.498477 master-0 kubenswrapper[27819]: I0319 09:52:13.498425 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 09:52:13.755528 master-0 kubenswrapper[27819]: I0319 09:52:13.755444 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed41734-ebfb-4bf9-836e-43e82b05e510","Type":"ContainerStarted","Data":"adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c"} Mar 19 09:52:13.755528 master-0 kubenswrapper[27819]: I0319 09:52:13.755520 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed41734-ebfb-4bf9-836e-43e82b05e510","Type":"ContainerStarted","Data":"0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e"} Mar 19 09:52:13.755528 master-0 kubenswrapper[27819]: I0319 09:52:13.755533 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed41734-ebfb-4bf9-836e-43e82b05e510","Type":"ContainerStarted","Data":"45a71a35f8b31a5447be32927c80a9944dacc72dc17bd222f627015071d538a5"} Mar 19 09:52:13.790003 master-0 kubenswrapper[27819]: I0319 09:52:13.789913 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.789894705 podStartE2EDuration="2.789894705s" podCreationTimestamp="2026-03-19 09:52:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:13.77430247 +0000 UTC m=+1118.695880162" watchObservedRunningTime="2026-03-19 09:52:13.789894705 +0000 UTC m=+1118.711472397" Mar 19 09:52:14.087333 master-0 kubenswrapper[27819]: I0319 09:52:14.087215 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 19 09:52:14.173182 master-0 kubenswrapper[27819]: I0319 09:52:14.173130 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Mar 19 09:52:14.805904 master-0 kubenswrapper[27819]: I0319 09:52:14.805858 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 19 09:52:15.783514 master-0 kubenswrapper[27819]: I0319 09:52:15.783450 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Mar 19 09:52:15.948484 master-0 kubenswrapper[27819]: I0319 09:52:15.948420 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Mar 19 09:52:18.499266 master-0 kubenswrapper[27819]: I0319 09:52:18.499198 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 19 09:52:18.533852 master-0 kubenswrapper[27819]: I0319 09:52:18.531596 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 19 09:52:18.846628 master-0 kubenswrapper[27819]: I0319 09:52:18.846588 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 09:52:19.066294 master-0 kubenswrapper[27819]: I0319 09:52:19.066228 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 09:52:19.066652 master-0 kubenswrapper[27819]: I0319 09:52:19.066632 27819 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 09:52:20.148948 master-0 kubenswrapper[27819]: I0319 09:52:20.148852 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:20.149477 master-0 kubenswrapper[27819]: I0319 09:52:20.148884 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:20.694517 master-0 kubenswrapper[27819]: I0319 09:52:20.694407 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:20.743364 master-0 kubenswrapper[27819]: I0319 09:52:20.742945 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-combined-ca-bundle\") pod \"ca8a1d31-9568-44f1-8642-12df8d02b456\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " Mar 19 09:52:20.743364 master-0 kubenswrapper[27819]: I0319 09:52:20.743060 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-config-data\") pod \"ca8a1d31-9568-44f1-8642-12df8d02b456\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " Mar 19 09:52:20.743364 master-0 kubenswrapper[27819]: I0319 09:52:20.743100 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvhdq\" (UniqueName: 
\"kubernetes.io/projected/ca8a1d31-9568-44f1-8642-12df8d02b456-kube-api-access-vvhdq\") pod \"ca8a1d31-9568-44f1-8642-12df8d02b456\" (UID: \"ca8a1d31-9568-44f1-8642-12df8d02b456\") " Mar 19 09:52:20.760417 master-0 kubenswrapper[27819]: I0319 09:52:20.758165 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8a1d31-9568-44f1-8642-12df8d02b456-kube-api-access-vvhdq" (OuterVolumeSpecName: "kube-api-access-vvhdq") pod "ca8a1d31-9568-44f1-8642-12df8d02b456" (UID: "ca8a1d31-9568-44f1-8642-12df8d02b456"). InnerVolumeSpecName "kube-api-access-vvhdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:20.777274 master-0 kubenswrapper[27819]: I0319 09:52:20.777100 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-config-data" (OuterVolumeSpecName: "config-data") pod "ca8a1d31-9568-44f1-8642-12df8d02b456" (UID: "ca8a1d31-9568-44f1-8642-12df8d02b456"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:20.781629 master-0 kubenswrapper[27819]: I0319 09:52:20.781455 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca8a1d31-9568-44f1-8642-12df8d02b456" (UID: "ca8a1d31-9568-44f1-8642-12df8d02b456"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:20.841499 master-0 kubenswrapper[27819]: I0319 09:52:20.841344 27819 generic.go:334] "Generic (PLEG): container finished" podID="ca8a1d31-9568-44f1-8642-12df8d02b456" containerID="85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7" exitCode=137 Mar 19 09:52:20.841499 master-0 kubenswrapper[27819]: I0319 09:52:20.841413 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca8a1d31-9568-44f1-8642-12df8d02b456","Type":"ContainerDied","Data":"85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7"} Mar 19 09:52:20.841499 master-0 kubenswrapper[27819]: I0319 09:52:20.841449 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ca8a1d31-9568-44f1-8642-12df8d02b456","Type":"ContainerDied","Data":"1cfe40b806b46c0c89235cf75de691c359f87bf9201d354f2aa0ee69d55cf2c4"} Mar 19 09:52:20.841499 master-0 kubenswrapper[27819]: I0319 09:52:20.841471 27819 scope.go:117] "RemoveContainer" containerID="85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7" Mar 19 09:52:20.841856 master-0 kubenswrapper[27819]: I0319 09:52:20.841665 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:20.849320 master-0 kubenswrapper[27819]: I0319 09:52:20.848894 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:20.849320 master-0 kubenswrapper[27819]: I0319 09:52:20.848937 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca8a1d31-9568-44f1-8642-12df8d02b456-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:20.849320 master-0 kubenswrapper[27819]: I0319 09:52:20.848951 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvhdq\" (UniqueName: \"kubernetes.io/projected/ca8a1d31-9568-44f1-8642-12df8d02b456-kube-api-access-vvhdq\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:20.930788 master-0 kubenswrapper[27819]: I0319 09:52:20.930607 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 09:52:20.963945 master-0 kubenswrapper[27819]: I0319 09:52:20.963714 27819 scope.go:117] "RemoveContainer" containerID="85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7" Mar 19 09:52:20.964723 master-0 kubenswrapper[27819]: E0319 09:52:20.964176 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7\": container with ID starting with 85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7 not found: ID does not exist" containerID="85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7" Mar 19 09:52:20.964723 master-0 kubenswrapper[27819]: I0319 09:52:20.964215 27819 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7"} err="failed to get container status \"85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7\": rpc error: code = NotFound desc = could not find container \"85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7\": container with ID starting with 85696e9d31072ef640de1eb8557718a5ef852a61f9e982b4e9819be2b89d0dd7 not found: ID does not exist" Mar 19 09:52:20.966994 master-0 kubenswrapper[27819]: I0319 09:52:20.966946 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 09:52:20.980201 master-0 kubenswrapper[27819]: I0319 09:52:20.980107 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 09:52:20.980762 master-0 kubenswrapper[27819]: E0319 09:52:20.980738 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8a1d31-9568-44f1-8642-12df8d02b456" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 09:52:20.980762 master-0 kubenswrapper[27819]: I0319 09:52:20.980763 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8a1d31-9568-44f1-8642-12df8d02b456" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 09:52:20.981140 master-0 kubenswrapper[27819]: I0319 09:52:20.981108 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8a1d31-9568-44f1-8642-12df8d02b456" containerName="nova-cell1-novncproxy-novncproxy" Mar 19 09:52:20.982011 master-0 kubenswrapper[27819]: I0319 09:52:20.981982 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:20.984356 master-0 kubenswrapper[27819]: I0319 09:52:20.984318 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 19 09:52:20.985344 master-0 kubenswrapper[27819]: I0319 09:52:20.984477 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 19 09:52:20.985344 master-0 kubenswrapper[27819]: I0319 09:52:20.984630 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 19 09:52:20.995961 master-0 kubenswrapper[27819]: I0319 09:52:20.995909 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 09:52:21.066748 master-0 kubenswrapper[27819]: I0319 09:52:21.066689 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx5gw\" (UniqueName: \"kubernetes.io/projected/2627e86a-b91f-43f9-a579-2b0406d4d628-kube-api-access-vx5gw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.066992 master-0 kubenswrapper[27819]: I0319 09:52:21.066772 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.066992 master-0 kubenswrapper[27819]: I0319 09:52:21.066809 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.066992 master-0 kubenswrapper[27819]: I0319 09:52:21.066858 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.067137 master-0 kubenswrapper[27819]: I0319 09:52:21.067097 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.169064 master-0 kubenswrapper[27819]: I0319 09:52:21.168976 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx5gw\" (UniqueName: \"kubernetes.io/projected/2627e86a-b91f-43f9-a579-2b0406d4d628-kube-api-access-vx5gw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.169064 master-0 kubenswrapper[27819]: I0319 09:52:21.169059 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.169711 master-0 kubenswrapper[27819]: I0319 09:52:21.169087 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-combined-ca-bundle\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.169711 master-0 kubenswrapper[27819]: I0319 09:52:21.169121 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.169711 master-0 kubenswrapper[27819]: I0319 09:52:21.169284 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.173082 master-0 kubenswrapper[27819]: I0319 09:52:21.173042 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.173609 master-0 kubenswrapper[27819]: I0319 09:52:21.173567 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.173699 master-0 kubenswrapper[27819]: I0319 09:52:21.173636 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.175312 master-0 kubenswrapper[27819]: I0319 09:52:21.175271 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2627e86a-b91f-43f9-a579-2b0406d4d628-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.190750 master-0 kubenswrapper[27819]: I0319 09:52:21.190704 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx5gw\" (UniqueName: \"kubernetes.io/projected/2627e86a-b91f-43f9-a579-2b0406d4d628-kube-api-access-vx5gw\") pod \"nova-cell1-novncproxy-0\" (UID: \"2627e86a-b91f-43f9-a579-2b0406d4d628\") " pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.304675 master-0 kubenswrapper[27819]: I0319 09:52:21.304522 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca8a1d31-9568-44f1-8642-12df8d02b456" path="/var/lib/kubelet/pods/ca8a1d31-9568-44f1-8642-12df8d02b456/volumes" Mar 19 09:52:21.305718 master-0 kubenswrapper[27819]: I0319 09:52:21.305679 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:21.812109 master-0 kubenswrapper[27819]: I0319 09:52:21.808995 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 19 09:52:21.855791 master-0 kubenswrapper[27819]: I0319 09:52:21.855304 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2627e86a-b91f-43f9-a579-2b0406d4d628","Type":"ContainerStarted","Data":"22370e67dbf29a2d22b7689cbbd53a0983bdb119d61ee46fc2e7107f1276c0e9"} Mar 19 09:52:22.386889 master-0 kubenswrapper[27819]: I0319 09:52:22.386743 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 09:52:22.387584 master-0 kubenswrapper[27819]: I0319 09:52:22.387569 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 09:52:22.872583 master-0 kubenswrapper[27819]: I0319 09:52:22.872516 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2627e86a-b91f-43f9-a579-2b0406d4d628","Type":"ContainerStarted","Data":"adc4ef433309aaa7c019df2ce7947309f3ab92b8a7164ff13dc8a0069c71bc75"} Mar 19 09:52:22.895314 master-0 kubenswrapper[27819]: I0319 09:52:22.895238 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.8952208109999997 podStartE2EDuration="2.895220811s" podCreationTimestamp="2026-03-19 09:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:22.890988812 +0000 UTC m=+1127.812566504" watchObservedRunningTime="2026-03-19 09:52:22.895220811 +0000 UTC m=+1127.816798503" Mar 19 09:52:23.433869 master-0 kubenswrapper[27819]: I0319 09:52:23.433786 27819 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.5:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:23.434530 master-0 kubenswrapper[27819]: I0319 09:52:23.433837 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.5:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:26.306915 master-0 kubenswrapper[27819]: I0319 09:52:26.306843 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:27.066766 master-0 kubenswrapper[27819]: I0319 09:52:27.066702 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 09:52:27.067151 master-0 kubenswrapper[27819]: I0319 09:52:27.066875 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 09:52:29.070788 master-0 kubenswrapper[27819]: I0319 09:52:29.070712 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 09:52:29.071600 master-0 kubenswrapper[27819]: I0319 09:52:29.071052 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 09:52:29.074382 master-0 kubenswrapper[27819]: I0319 09:52:29.074343 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 09:52:29.951518 master-0 kubenswrapper[27819]: I0319 09:52:29.951441 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 09:52:30.386557 master-0 kubenswrapper[27819]: I0319 09:52:30.386494 27819 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 09:52:30.387246 master-0 kubenswrapper[27819]: I0319 09:52:30.386891 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 09:52:30.450580 master-0 kubenswrapper[27819]: I0319 09:52:30.450005 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb5c99df-95vwz"] Mar 19 09:52:30.453411 master-0 kubenswrapper[27819]: I0319 09:52:30.452468 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.468564 master-0 kubenswrapper[27819]: I0319 09:52:30.464077 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb5c99df-95vwz"] Mar 19 09:52:30.507570 master-0 kubenswrapper[27819]: I0319 09:52:30.504137 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-config\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.507570 master-0 kubenswrapper[27819]: I0319 09:52:30.504237 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck6lf\" (UniqueName: \"kubernetes.io/projected/438f502b-dac1-4794-af36-c1779196e21c-kube-api-access-ck6lf\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.507570 master-0 kubenswrapper[27819]: I0319 09:52:30.504292 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: 
\"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.507570 master-0 kubenswrapper[27819]: I0319 09:52:30.504377 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.507570 master-0 kubenswrapper[27819]: I0319 09:52:30.504416 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-ovsdbserver-sb\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.507570 master-0 kubenswrapper[27819]: I0319 09:52:30.504441 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-dns-svc\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.606568 master-0 kubenswrapper[27819]: I0319 09:52:30.606281 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.606568 master-0 kubenswrapper[27819]: I0319 09:52:30.606380 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-ovsdbserver-sb\") 
pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.606568 master-0 kubenswrapper[27819]: I0319 09:52:30.606423 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-dns-svc\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.606568 master-0 kubenswrapper[27819]: I0319 09:52:30.606525 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-config\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.606959 master-0 kubenswrapper[27819]: I0319 09:52:30.606676 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck6lf\" (UniqueName: \"kubernetes.io/projected/438f502b-dac1-4794-af36-c1779196e21c-kube-api-access-ck6lf\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.606959 master-0 kubenswrapper[27819]: I0319 09:52:30.606751 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.609625 master-0 kubenswrapper[27819]: I0319 09:52:30.607561 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-ovsdbserver-sb\") pod 
\"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.609625 master-0 kubenswrapper[27819]: I0319 09:52:30.607793 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-ovsdbserver-nb\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.609625 master-0 kubenswrapper[27819]: I0319 09:52:30.608507 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-dns-swift-storage-0\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.609625 master-0 kubenswrapper[27819]: I0319 09:52:30.608857 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-dns-svc\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.609625 master-0 kubenswrapper[27819]: I0319 09:52:30.609427 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/438f502b-dac1-4794-af36-c1779196e21c-config\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: \"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.631671 master-0 kubenswrapper[27819]: I0319 09:52:30.627700 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck6lf\" (UniqueName: \"kubernetes.io/projected/438f502b-dac1-4794-af36-c1779196e21c-kube-api-access-ck6lf\") pod \"dnsmasq-dns-bb5c99df-95vwz\" (UID: 
\"438f502b-dac1-4794-af36-c1779196e21c\") " pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:30.811622 master-0 kubenswrapper[27819]: I0319 09:52:30.811485 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:31.306916 master-0 kubenswrapper[27819]: I0319 09:52:31.306797 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:31.354569 master-0 kubenswrapper[27819]: I0319 09:52:31.354503 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:31.544092 master-0 kubenswrapper[27819]: W0319 09:52:31.544032 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod438f502b_dac1_4794_af36_c1779196e21c.slice/crio-31b97f67378921950a531298a91f3200a81fcb1b4d5c04fe7ca11010cef45262 WatchSource:0}: Error finding container 31b97f67378921950a531298a91f3200a81fcb1b4d5c04fe7ca11010cef45262: Status 404 returned error can't find the container with id 31b97f67378921950a531298a91f3200a81fcb1b4d5c04fe7ca11010cef45262 Mar 19 09:52:31.554326 master-0 kubenswrapper[27819]: I0319 09:52:31.554275 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb5c99df-95vwz"] Mar 19 09:52:32.050563 master-0 kubenswrapper[27819]: I0319 09:52:32.050058 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" event={"ID":"438f502b-dac1-4794-af36-c1779196e21c","Type":"ContainerStarted","Data":"31a469027ae3d564a3a7ebf9bc2393ac8d6a3874702cbf131e8b60a10be28bf1"} Mar 19 09:52:32.050563 master-0 kubenswrapper[27819]: I0319 09:52:32.050107 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" 
event={"ID":"438f502b-dac1-4794-af36-c1779196e21c","Type":"ContainerStarted","Data":"31b97f67378921950a531298a91f3200a81fcb1b4d5c04fe7ca11010cef45262"} Mar 19 09:52:32.125244 master-0 kubenswrapper[27819]: I0319 09:52:32.125111 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 19 09:52:32.396584 master-0 kubenswrapper[27819]: I0319 09:52:32.396431 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 09:52:32.401501 master-0 kubenswrapper[27819]: I0319 09:52:32.400835 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 09:52:32.406356 master-0 kubenswrapper[27819]: I0319 09:52:32.406014 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 09:52:32.520164 master-0 kubenswrapper[27819]: I0319 09:52:32.520119 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-llz6n"] Mar 19 09:52:32.522132 master-0 kubenswrapper[27819]: I0319 09:52:32.522111 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.526197 master-0 kubenswrapper[27819]: I0319 09:52:32.523903 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 19 09:52:32.526197 master-0 kubenswrapper[27819]: I0319 09:52:32.524567 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 19 09:52:32.563213 master-0 kubenswrapper[27819]: I0319 09:52:32.560526 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.563213 master-0 kubenswrapper[27819]: I0319 09:52:32.560620 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-config-data\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.563213 master-0 kubenswrapper[27819]: I0319 09:52:32.560662 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-scripts\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.563213 master-0 kubenswrapper[27819]: I0319 09:52:32.560691 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756s9\" (UniqueName: \"kubernetes.io/projected/2da9159b-7125-4a77-ae29-410332c3be7c-kube-api-access-756s9\") 
pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.564969 master-0 kubenswrapper[27819]: I0319 09:52:32.564926 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-llz6n"] Mar 19 09:52:32.601814 master-0 kubenswrapper[27819]: I0319 09:52:32.601569 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-fqjbw"] Mar 19 09:52:32.604584 master-0 kubenswrapper[27819]: I0319 09:52:32.604556 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.614702 master-0 kubenswrapper[27819]: I0319 09:52:32.614618 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-fqjbw"] Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668040 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668144 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-config-data\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668195 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-scripts\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: 
\"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668230 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756s9\" (UniqueName: \"kubernetes.io/projected/2da9159b-7125-4a77-ae29-410332c3be7c-kube-api-access-756s9\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668270 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-config-data\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668300 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-combined-ca-bundle\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668397 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zgm2\" (UniqueName: \"kubernetes.io/projected/c5a640c3-526f-4686-be45-1e77d19d22e5-kube-api-access-5zgm2\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.676448 master-0 kubenswrapper[27819]: I0319 09:52:32.668452 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-scripts\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.680610 master-0 kubenswrapper[27819]: I0319 09:52:32.679691 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.681195 master-0 kubenswrapper[27819]: I0319 09:52:32.681156 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-config-data\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.683494 master-0 kubenswrapper[27819]: I0319 09:52:32.682488 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-scripts\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.694701 master-0 kubenswrapper[27819]: I0319 09:52:32.690014 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756s9\" (UniqueName: \"kubernetes.io/projected/2da9159b-7125-4a77-ae29-410332c3be7c-kube-api-access-756s9\") pod \"nova-cell1-cell-mapping-llz6n\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.771719 master-0 kubenswrapper[27819]: I0319 09:52:32.770754 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-config-data\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.771719 master-0 kubenswrapper[27819]: I0319 09:52:32.770815 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-combined-ca-bundle\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.771719 master-0 kubenswrapper[27819]: I0319 09:52:32.770922 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zgm2\" (UniqueName: \"kubernetes.io/projected/c5a640c3-526f-4686-be45-1e77d19d22e5-kube-api-access-5zgm2\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.771719 master-0 kubenswrapper[27819]: I0319 09:52:32.770964 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-scripts\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.774501 master-0 kubenswrapper[27819]: I0319 09:52:32.774274 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-config-data\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.774967 master-0 kubenswrapper[27819]: I0319 09:52:32.774768 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-combined-ca-bundle\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.776118 master-0 kubenswrapper[27819]: I0319 09:52:32.776079 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-scripts\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.797663 master-0 kubenswrapper[27819]: I0319 09:52:32.796403 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zgm2\" (UniqueName: \"kubernetes.io/projected/c5a640c3-526f-4686-be45-1e77d19d22e5-kube-api-access-5zgm2\") pod \"nova-cell1-host-discover-fqjbw\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:32.868621 master-0 kubenswrapper[27819]: I0319 09:52:32.868505 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:32.943154 master-0 kubenswrapper[27819]: I0319 09:52:32.943019 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:33.065169 master-0 kubenswrapper[27819]: I0319 09:52:33.065089 27819 generic.go:334] "Generic (PLEG): container finished" podID="438f502b-dac1-4794-af36-c1779196e21c" containerID="31a469027ae3d564a3a7ebf9bc2393ac8d6a3874702cbf131e8b60a10be28bf1" exitCode=0 Mar 19 09:52:33.065419 master-0 kubenswrapper[27819]: I0319 09:52:33.065273 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" event={"ID":"438f502b-dac1-4794-af36-c1779196e21c","Type":"ContainerDied","Data":"31a469027ae3d564a3a7ebf9bc2393ac8d6a3874702cbf131e8b60a10be28bf1"} Mar 19 09:52:33.077831 master-0 kubenswrapper[27819]: I0319 09:52:33.077768 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 09:52:33.423876 master-0 kubenswrapper[27819]: I0319 09:52:33.423791 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-llz6n"] Mar 19 09:52:33.772902 master-0 kubenswrapper[27819]: I0319 09:52:33.771771 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-fqjbw"] Mar 19 09:52:34.081255 master-0 kubenswrapper[27819]: I0319 09:52:34.081185 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" event={"ID":"438f502b-dac1-4794-af36-c1779196e21c","Type":"ContainerStarted","Data":"2ab7deebe3964592cd765099017a7b2ee8228461a800ee28c75fd257edf2fed5"} Mar 19 09:52:34.081594 master-0 kubenswrapper[27819]: I0319 09:52:34.081488 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:34.085211 master-0 kubenswrapper[27819]: I0319 09:52:34.084958 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-fqjbw" 
event={"ID":"c5a640c3-526f-4686-be45-1e77d19d22e5","Type":"ContainerStarted","Data":"b09ec752e758d6653ed9b6cadec20d81f7389748bcb277605dd2288b651a6ca9"} Mar 19 09:52:34.085211 master-0 kubenswrapper[27819]: I0319 09:52:34.085026 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-fqjbw" event={"ID":"c5a640c3-526f-4686-be45-1e77d19d22e5","Type":"ContainerStarted","Data":"7ff03840df70ba1fff3b03b2778d3269aed65fcb2754f6e98f9cc2afc686e325"} Mar 19 09:52:34.094712 master-0 kubenswrapper[27819]: I0319 09:52:34.094623 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-llz6n" event={"ID":"2da9159b-7125-4a77-ae29-410332c3be7c","Type":"ContainerStarted","Data":"66b35d3cda789cf6068630ed695438ae03f27b1d4d4c45f188ba0c93b84638a9"} Mar 19 09:52:34.094920 master-0 kubenswrapper[27819]: I0319 09:52:34.094728 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-llz6n" event={"ID":"2da9159b-7125-4a77-ae29-410332c3be7c","Type":"ContainerStarted","Data":"d82fcd02a52cdf8801d80658a414670d37232a2aff5c8eae178086391e851705"} Mar 19 09:52:34.111228 master-0 kubenswrapper[27819]: I0319 09:52:34.111122 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" podStartSLOduration=4.11109222 podStartE2EDuration="4.11109222s" podCreationTimestamp="2026-03-19 09:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:34.104495129 +0000 UTC m=+1139.026072841" watchObservedRunningTime="2026-03-19 09:52:34.11109222 +0000 UTC m=+1139.032669912" Mar 19 09:52:34.130901 master-0 kubenswrapper[27819]: I0319 09:52:34.130787 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-fqjbw" podStartSLOduration=2.130760989 podStartE2EDuration="2.130760989s" 
podCreationTimestamp="2026-03-19 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:34.120140694 +0000 UTC m=+1139.041718396" watchObservedRunningTime="2026-03-19 09:52:34.130760989 +0000 UTC m=+1139.052338681" Mar 19 09:52:34.171395 master-0 kubenswrapper[27819]: I0319 09:52:34.171219 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-llz6n" podStartSLOduration=2.171196426 podStartE2EDuration="2.171196426s" podCreationTimestamp="2026-03-19 09:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:34.158644512 +0000 UTC m=+1139.080222204" watchObservedRunningTime="2026-03-19 09:52:34.171196426 +0000 UTC m=+1139.092774118" Mar 19 09:52:34.296410 master-0 kubenswrapper[27819]: I0319 09:52:34.296272 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:34.296636 master-0 kubenswrapper[27819]: I0319 09:52:34.296535 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-log" containerID="cri-o://97b734b20cfb4ff1883fd921382a0c940df7a6d3b6dca630e4f8ef724ada9123" gracePeriod=30 Mar 19 09:52:34.296719 master-0 kubenswrapper[27819]: I0319 09:52:34.296655 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-api" containerID="cri-o://2dbd62e4b02e3ca11349164ed81ff9f7312fc1c2636e9694f8da5359f62e2af9" gracePeriod=30 Mar 19 09:52:35.111584 master-0 kubenswrapper[27819]: I0319 09:52:35.110535 27819 generic.go:334] "Generic (PLEG): container finished" podID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" 
containerID="97b734b20cfb4ff1883fd921382a0c940df7a6d3b6dca630e4f8ef724ada9123" exitCode=143 Mar 19 09:52:35.111584 master-0 kubenswrapper[27819]: I0319 09:52:35.110585 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"550c9a40-cbc7-4df8-aa14-ccc4e43c096d","Type":"ContainerDied","Data":"97b734b20cfb4ff1883fd921382a0c940df7a6d3b6dca630e4f8ef724ada9123"} Mar 19 09:52:37.142159 master-0 kubenswrapper[27819]: I0319 09:52:37.142028 27819 generic.go:334] "Generic (PLEG): container finished" podID="c5a640c3-526f-4686-be45-1e77d19d22e5" containerID="b09ec752e758d6653ed9b6cadec20d81f7389748bcb277605dd2288b651a6ca9" exitCode=0 Mar 19 09:52:37.142856 master-0 kubenswrapper[27819]: I0319 09:52:37.142110 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-fqjbw" event={"ID":"c5a640c3-526f-4686-be45-1e77d19d22e5","Type":"ContainerDied","Data":"b09ec752e758d6653ed9b6cadec20d81f7389748bcb277605dd2288b651a6ca9"} Mar 19 09:52:38.156789 master-0 kubenswrapper[27819]: I0319 09:52:38.156729 27819 generic.go:334] "Generic (PLEG): container finished" podID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerID="2dbd62e4b02e3ca11349164ed81ff9f7312fc1c2636e9694f8da5359f62e2af9" exitCode=0 Mar 19 09:52:38.158029 master-0 kubenswrapper[27819]: I0319 09:52:38.157990 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"550c9a40-cbc7-4df8-aa14-ccc4e43c096d","Type":"ContainerDied","Data":"2dbd62e4b02e3ca11349164ed81ff9f7312fc1c2636e9694f8da5359f62e2af9"} Mar 19 09:52:38.524822 master-0 kubenswrapper[27819]: I0319 09:52:38.524762 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:38.660826 master-0 kubenswrapper[27819]: I0319 09:52:38.660650 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-config-data\") pod \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " Mar 19 09:52:38.660826 master-0 kubenswrapper[27819]: I0319 09:52:38.660696 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-combined-ca-bundle\") pod \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " Mar 19 09:52:38.660826 master-0 kubenswrapper[27819]: I0319 09:52:38.660766 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-logs\") pod \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " Mar 19 09:52:38.661121 master-0 kubenswrapper[27819]: I0319 09:52:38.661078 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lx42\" (UniqueName: \"kubernetes.io/projected/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-kube-api-access-2lx42\") pod \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\" (UID: \"550c9a40-cbc7-4df8-aa14-ccc4e43c096d\") " Mar 19 09:52:38.661440 master-0 kubenswrapper[27819]: I0319 09:52:38.661399 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-logs" (OuterVolumeSpecName: "logs") pod "550c9a40-cbc7-4df8-aa14-ccc4e43c096d" (UID: "550c9a40-cbc7-4df8-aa14-ccc4e43c096d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:52:38.662059 master-0 kubenswrapper[27819]: I0319 09:52:38.661978 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:38.665628 master-0 kubenswrapper[27819]: I0319 09:52:38.665583 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-kube-api-access-2lx42" (OuterVolumeSpecName: "kube-api-access-2lx42") pod "550c9a40-cbc7-4df8-aa14-ccc4e43c096d" (UID: "550c9a40-cbc7-4df8-aa14-ccc4e43c096d"). InnerVolumeSpecName "kube-api-access-2lx42". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:38.667889 master-0 kubenswrapper[27819]: I0319 09:52:38.667828 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:38.709626 master-0 kubenswrapper[27819]: I0319 09:52:38.702511 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-config-data" (OuterVolumeSpecName: "config-data") pod "550c9a40-cbc7-4df8-aa14-ccc4e43c096d" (UID: "550c9a40-cbc7-4df8-aa14-ccc4e43c096d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:38.726746 master-0 kubenswrapper[27819]: I0319 09:52:38.726687 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "550c9a40-cbc7-4df8-aa14-ccc4e43c096d" (UID: "550c9a40-cbc7-4df8-aa14-ccc4e43c096d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:38.771531 master-0 kubenswrapper[27819]: I0319 09:52:38.763110 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-scripts\") pod \"c5a640c3-526f-4686-be45-1e77d19d22e5\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " Mar 19 09:52:38.771531 master-0 kubenswrapper[27819]: I0319 09:52:38.763297 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zgm2\" (UniqueName: \"kubernetes.io/projected/c5a640c3-526f-4686-be45-1e77d19d22e5-kube-api-access-5zgm2\") pod \"c5a640c3-526f-4686-be45-1e77d19d22e5\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " Mar 19 09:52:38.771531 master-0 kubenswrapper[27819]: I0319 09:52:38.763415 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-combined-ca-bundle\") pod \"c5a640c3-526f-4686-be45-1e77d19d22e5\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " Mar 19 09:52:38.771531 master-0 kubenswrapper[27819]: I0319 09:52:38.763452 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-config-data\") pod \"c5a640c3-526f-4686-be45-1e77d19d22e5\" (UID: \"c5a640c3-526f-4686-be45-1e77d19d22e5\") " Mar 19 09:52:38.771531 master-0 kubenswrapper[27819]: I0319 09:52:38.764468 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:38.771531 master-0 kubenswrapper[27819]: I0319 09:52:38.764496 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:38.771531 master-0 kubenswrapper[27819]: I0319 09:52:38.764510 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lx42\" (UniqueName: \"kubernetes.io/projected/550c9a40-cbc7-4df8-aa14-ccc4e43c096d-kube-api-access-2lx42\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:38.774022 master-0 kubenswrapper[27819]: I0319 09:52:38.773961 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-scripts" (OuterVolumeSpecName: "scripts") pod "c5a640c3-526f-4686-be45-1e77d19d22e5" (UID: "c5a640c3-526f-4686-be45-1e77d19d22e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:38.777368 master-0 kubenswrapper[27819]: I0319 09:52:38.777319 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a640c3-526f-4686-be45-1e77d19d22e5-kube-api-access-5zgm2" (OuterVolumeSpecName: "kube-api-access-5zgm2") pod "c5a640c3-526f-4686-be45-1e77d19d22e5" (UID: "c5a640c3-526f-4686-be45-1e77d19d22e5"). InnerVolumeSpecName "kube-api-access-5zgm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:38.800737 master-0 kubenswrapper[27819]: I0319 09:52:38.800680 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5a640c3-526f-4686-be45-1e77d19d22e5" (UID: "c5a640c3-526f-4686-be45-1e77d19d22e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:38.807951 master-0 kubenswrapper[27819]: I0319 09:52:38.807895 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-config-data" (OuterVolumeSpecName: "config-data") pod "c5a640c3-526f-4686-be45-1e77d19d22e5" (UID: "c5a640c3-526f-4686-be45-1e77d19d22e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:38.868861 master-0 kubenswrapper[27819]: I0319 09:52:38.868809 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zgm2\" (UniqueName: \"kubernetes.io/projected/c5a640c3-526f-4686-be45-1e77d19d22e5-kube-api-access-5zgm2\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:38.868861 master-0 kubenswrapper[27819]: I0319 09:52:38.868846 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:38.868861 master-0 kubenswrapper[27819]: I0319 09:52:38.868855 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:38.868861 master-0 kubenswrapper[27819]: I0319 09:52:38.868865 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5a640c3-526f-4686-be45-1e77d19d22e5-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:39.170928 master-0 kubenswrapper[27819]: I0319 09:52:39.170857 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"550c9a40-cbc7-4df8-aa14-ccc4e43c096d","Type":"ContainerDied","Data":"5e778eb57b6246a94489e8c85beca7a5efa25be59d7ad726d664b07b635044cc"} Mar 19 09:52:39.170928 master-0 kubenswrapper[27819]: 
I0319 09:52:39.170904 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:39.170928 master-0 kubenswrapper[27819]: I0319 09:52:39.170920 27819 scope.go:117] "RemoveContainer" containerID="2dbd62e4b02e3ca11349164ed81ff9f7312fc1c2636e9694f8da5359f62e2af9" Mar 19 09:52:39.173030 master-0 kubenswrapper[27819]: I0319 09:52:39.172994 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-fqjbw" event={"ID":"c5a640c3-526f-4686-be45-1e77d19d22e5","Type":"ContainerDied","Data":"7ff03840df70ba1fff3b03b2778d3269aed65fcb2754f6e98f9cc2afc686e325"} Mar 19 09:52:39.173030 master-0 kubenswrapper[27819]: I0319 09:52:39.173025 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff03840df70ba1fff3b03b2778d3269aed65fcb2754f6e98f9cc2afc686e325" Mar 19 09:52:39.173162 master-0 kubenswrapper[27819]: I0319 09:52:39.173071 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-fqjbw" Mar 19 09:52:39.176483 master-0 kubenswrapper[27819]: I0319 09:52:39.176437 27819 generic.go:334] "Generic (PLEG): container finished" podID="2da9159b-7125-4a77-ae29-410332c3be7c" containerID="66b35d3cda789cf6068630ed695438ae03f27b1d4d4c45f188ba0c93b84638a9" exitCode=0 Mar 19 09:52:39.176661 master-0 kubenswrapper[27819]: I0319 09:52:39.176623 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-llz6n" event={"ID":"2da9159b-7125-4a77-ae29-410332c3be7c","Type":"ContainerDied","Data":"66b35d3cda789cf6068630ed695438ae03f27b1d4d4c45f188ba0c93b84638a9"} Mar 19 09:52:39.192071 master-0 kubenswrapper[27819]: I0319 09:52:39.191617 27819 scope.go:117] "RemoveContainer" containerID="97b734b20cfb4ff1883fd921382a0c940df7a6d3b6dca630e4f8ef724ada9123" Mar 19 09:52:39.298997 master-0 kubenswrapper[27819]: I0319 09:52:39.298914 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:39.333804 master-0 kubenswrapper[27819]: I0319 09:52:39.333738 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:39.359029 master-0 kubenswrapper[27819]: I0319 09:52:39.358983 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:39.360214 master-0 kubenswrapper[27819]: E0319 09:52:39.360066 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a640c3-526f-4686-be45-1e77d19d22e5" containerName="nova-manage" Mar 19 09:52:39.360382 master-0 kubenswrapper[27819]: I0319 09:52:39.360369 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a640c3-526f-4686-be45-1e77d19d22e5" containerName="nova-manage" Mar 19 09:52:39.364428 master-0 kubenswrapper[27819]: E0319 09:52:39.360800 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-api" Mar 19 09:52:39.364428 
master-0 kubenswrapper[27819]: I0319 09:52:39.362637 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-api" Mar 19 09:52:39.364428 master-0 kubenswrapper[27819]: E0319 09:52:39.362667 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-log" Mar 19 09:52:39.364428 master-0 kubenswrapper[27819]: I0319 09:52:39.362674 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-log" Mar 19 09:52:39.364428 master-0 kubenswrapper[27819]: I0319 09:52:39.363026 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-log" Mar 19 09:52:39.364428 master-0 kubenswrapper[27819]: I0319 09:52:39.363070 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" containerName="nova-api-api" Mar 19 09:52:39.364428 master-0 kubenswrapper[27819]: I0319 09:52:39.363085 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a640c3-526f-4686-be45-1e77d19d22e5" containerName="nova-manage" Mar 19 09:52:39.367355 master-0 kubenswrapper[27819]: I0319 09:52:39.364576 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:39.368383 master-0 kubenswrapper[27819]: I0319 09:52:39.368261 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 19 09:52:39.368727 master-0 kubenswrapper[27819]: I0319 09:52:39.368702 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 19 09:52:39.379154 master-0 kubenswrapper[27819]: I0319 09:52:39.372456 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:39.379386 master-0 kubenswrapper[27819]: I0319 09:52:39.379292 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 09:52:39.491175 master-0 kubenswrapper[27819]: I0319 09:52:39.491095 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-config-data\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.491175 master-0 kubenswrapper[27819]: I0319 09:52:39.491157 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.491454 master-0 kubenswrapper[27819]: I0319 09:52:39.491203 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.491454 master-0 kubenswrapper[27819]: I0319 09:52:39.491289 27819 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mks48\" (UniqueName: \"kubernetes.io/projected/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-kube-api-access-mks48\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.491454 master-0 kubenswrapper[27819]: I0319 09:52:39.491397 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-logs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.491454 master-0 kubenswrapper[27819]: I0319 09:52:39.491423 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.593690 master-0 kubenswrapper[27819]: I0319 09:52:39.593630 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.593690 master-0 kubenswrapper[27819]: I0319 09:52:39.593693 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-config-data\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.593967 master-0 kubenswrapper[27819]: I0319 09:52:39.593873 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.594040 master-0 kubenswrapper[27819]: I0319 09:52:39.594016 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mks48\" (UniqueName: \"kubernetes.io/projected/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-kube-api-access-mks48\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.594184 master-0 kubenswrapper[27819]: I0319 09:52:39.594157 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-logs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.594228 master-0 kubenswrapper[27819]: I0319 09:52:39.594197 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.595052 master-0 kubenswrapper[27819]: I0319 09:52:39.595020 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-logs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.597524 master-0 kubenswrapper[27819]: I0319 09:52:39.597469 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 
09:52:39.598140 master-0 kubenswrapper[27819]: I0319 09:52:39.598106 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.598719 master-0 kubenswrapper[27819]: I0319 09:52:39.598679 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-config-data\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.616849 master-0 kubenswrapper[27819]: I0319 09:52:39.616794 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-public-tls-certs\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.617580 master-0 kubenswrapper[27819]: I0319 09:52:39.617505 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mks48\" (UniqueName: \"kubernetes.io/projected/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-kube-api-access-mks48\") pod \"nova-api-0\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " pod="openstack/nova-api-0" Mar 19 09:52:39.764720 master-0 kubenswrapper[27819]: I0319 09:52:39.764569 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:40.275633 master-0 kubenswrapper[27819]: I0319 09:52:40.275524 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:40.285144 master-0 kubenswrapper[27819]: W0319 09:52:40.285092 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb4bc283_1922_4f11_b9f7_d875d8f2e99b.slice/crio-9689dd1076b1f67b36d31d06af3983d59f94d83adaeeeb95e38efed4544ff714 WatchSource:0}: Error finding container 9689dd1076b1f67b36d31d06af3983d59f94d83adaeeeb95e38efed4544ff714: Status 404 returned error can't find the container with id 9689dd1076b1f67b36d31d06af3983d59f94d83adaeeeb95e38efed4544ff714 Mar 19 09:52:40.812831 master-0 kubenswrapper[27819]: I0319 09:52:40.812716 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb5c99df-95vwz" Mar 19 09:52:40.829037 master-0 kubenswrapper[27819]: I0319 09:52:40.828985 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:40.926867 master-0 kubenswrapper[27819]: I0319 09:52:40.921601 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58968b8785-ggchc"] Mar 19 09:52:40.926867 master-0 kubenswrapper[27819]: I0319 09:52:40.921851 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58968b8785-ggchc" podUID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerName="dnsmasq-dns" containerID="cri-o://e04af2a5f34d973f3b08b43c387cb86c88f8e64559d556dd858ae12d70c87536" gracePeriod=10 Mar 19 09:52:40.929632 master-0 kubenswrapper[27819]: I0319 09:52:40.929366 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-combined-ca-bundle\") pod \"2da9159b-7125-4a77-ae29-410332c3be7c\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " Mar 19 09:52:40.929632 master-0 kubenswrapper[27819]: I0319 09:52:40.929494 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-scripts\") pod \"2da9159b-7125-4a77-ae29-410332c3be7c\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " Mar 19 09:52:40.929632 master-0 kubenswrapper[27819]: I0319 09:52:40.929549 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-config-data\") pod \"2da9159b-7125-4a77-ae29-410332c3be7c\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " Mar 19 09:52:40.929632 master-0 kubenswrapper[27819]: I0319 09:52:40.929589 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-756s9\" (UniqueName: \"kubernetes.io/projected/2da9159b-7125-4a77-ae29-410332c3be7c-kube-api-access-756s9\") pod 
\"2da9159b-7125-4a77-ae29-410332c3be7c\" (UID: \"2da9159b-7125-4a77-ae29-410332c3be7c\") " Mar 19 09:52:40.942998 master-0 kubenswrapper[27819]: I0319 09:52:40.942771 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da9159b-7125-4a77-ae29-410332c3be7c-kube-api-access-756s9" (OuterVolumeSpecName: "kube-api-access-756s9") pod "2da9159b-7125-4a77-ae29-410332c3be7c" (UID: "2da9159b-7125-4a77-ae29-410332c3be7c"). InnerVolumeSpecName "kube-api-access-756s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:40.951463 master-0 kubenswrapper[27819]: I0319 09:52:40.946864 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-scripts" (OuterVolumeSpecName: "scripts") pod "2da9159b-7125-4a77-ae29-410332c3be7c" (UID: "2da9159b-7125-4a77-ae29-410332c3be7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:40.977633 master-0 kubenswrapper[27819]: I0319 09:52:40.975162 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-config-data" (OuterVolumeSpecName: "config-data") pod "2da9159b-7125-4a77-ae29-410332c3be7c" (UID: "2da9159b-7125-4a77-ae29-410332c3be7c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:41.021416 master-0 kubenswrapper[27819]: I0319 09:52:41.020981 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2da9159b-7125-4a77-ae29-410332c3be7c" (UID: "2da9159b-7125-4a77-ae29-410332c3be7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:41.033855 master-0 kubenswrapper[27819]: I0319 09:52:41.033784 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-756s9\" (UniqueName: \"kubernetes.io/projected/2da9159b-7125-4a77-ae29-410332c3be7c-kube-api-access-756s9\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:41.033943 master-0 kubenswrapper[27819]: I0319 09:52:41.033860 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:41.033943 master-0 kubenswrapper[27819]: I0319 09:52:41.033877 27819 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-scripts\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:41.033943 master-0 kubenswrapper[27819]: I0319 09:52:41.033890 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da9159b-7125-4a77-ae29-410332c3be7c-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:41.256586 master-0 kubenswrapper[27819]: I0319 09:52:41.255560 27819 generic.go:334] "Generic (PLEG): container finished" podID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerID="e04af2a5f34d973f3b08b43c387cb86c88f8e64559d556dd858ae12d70c87536" exitCode=0 Mar 19 09:52:41.259827 master-0 kubenswrapper[27819]: I0319 09:52:41.255642 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58968b8785-ggchc" event={"ID":"a2a19f02-b040-4ac7-ba5e-40aab5169420","Type":"ContainerDied","Data":"e04af2a5f34d973f3b08b43c387cb86c88f8e64559d556dd858ae12d70c87536"} Mar 19 09:52:41.264581 master-0 kubenswrapper[27819]: I0319 09:52:41.261507 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"bb4bc283-1922-4f11-b9f7-d875d8f2e99b","Type":"ContainerStarted","Data":"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222"} Mar 19 09:52:41.264581 master-0 kubenswrapper[27819]: I0319 09:52:41.261592 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb4bc283-1922-4f11-b9f7-d875d8f2e99b","Type":"ContainerStarted","Data":"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c"} Mar 19 09:52:41.264581 master-0 kubenswrapper[27819]: I0319 09:52:41.261610 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb4bc283-1922-4f11-b9f7-d875d8f2e99b","Type":"ContainerStarted","Data":"9689dd1076b1f67b36d31d06af3983d59f94d83adaeeeb95e38efed4544ff714"} Mar 19 09:52:41.276245 master-0 kubenswrapper[27819]: I0319 09:52:41.265642 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-llz6n" event={"ID":"2da9159b-7125-4a77-ae29-410332c3be7c","Type":"ContainerDied","Data":"d82fcd02a52cdf8801d80658a414670d37232a2aff5c8eae178086391e851705"} Mar 19 09:52:41.276245 master-0 kubenswrapper[27819]: I0319 09:52:41.265684 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82fcd02a52cdf8801d80658a414670d37232a2aff5c8eae178086391e851705" Mar 19 09:52:41.276245 master-0 kubenswrapper[27819]: I0319 09:52:41.265734 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-llz6n" Mar 19 09:52:41.381716 master-0 kubenswrapper[27819]: I0319 09:52:41.368892 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.368846446 podStartE2EDuration="2.368846446s" podCreationTimestamp="2026-03-19 09:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:41.308728198 +0000 UTC m=+1146.230305910" watchObservedRunningTime="2026-03-19 09:52:41.368846446 +0000 UTC m=+1146.290424138" Mar 19 09:52:41.381716 master-0 kubenswrapper[27819]: I0319 09:52:41.380757 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550c9a40-cbc7-4df8-aa14-ccc4e43c096d" path="/var/lib/kubelet/pods/550c9a40-cbc7-4df8-aa14-ccc4e43c096d/volumes" Mar 19 09:52:41.482599 master-0 kubenswrapper[27819]: I0319 09:52:41.479649 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:41.482599 master-0 kubenswrapper[27819]: I0319 09:52:41.479915 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8cb2ca57-2118-4862-a72c-3cb12baf7972" containerName="nova-scheduler-scheduler" containerID="cri-o://a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05" gracePeriod=30 Mar 19 09:52:41.492581 master-0 kubenswrapper[27819]: I0319 09:52:41.491657 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:41.492581 master-0 kubenswrapper[27819]: I0319 09:52:41.491966 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-log" containerID="cri-o://0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e" gracePeriod=30 Mar 19 09:52:41.492938 
master-0 kubenswrapper[27819]: I0319 09:52:41.492607 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-metadata" containerID="cri-o://adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c" gracePeriod=30 Mar 19 09:52:41.507834 master-0 kubenswrapper[27819]: I0319 09:52:41.507659 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:41.746853 master-0 kubenswrapper[27819]: I0319 09:52:41.746820 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:52:41.897678 master-0 kubenswrapper[27819]: I0319 09:52:41.897187 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-sb\") pod \"a2a19f02-b040-4ac7-ba5e-40aab5169420\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " Mar 19 09:52:41.897925 master-0 kubenswrapper[27819]: I0319 09:52:41.897837 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-config\") pod \"a2a19f02-b040-4ac7-ba5e-40aab5169420\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " Mar 19 09:52:41.898041 master-0 kubenswrapper[27819]: I0319 09:52:41.897978 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-nb\") pod \"a2a19f02-b040-4ac7-ba5e-40aab5169420\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " Mar 19 09:52:41.898103 master-0 kubenswrapper[27819]: I0319 09:52:41.898051 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jd8c\" (UniqueName: 
\"kubernetes.io/projected/a2a19f02-b040-4ac7-ba5e-40aab5169420-kube-api-access-9jd8c\") pod \"a2a19f02-b040-4ac7-ba5e-40aab5169420\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " Mar 19 09:52:41.898103 master-0 kubenswrapper[27819]: I0319 09:52:41.898080 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-swift-storage-0\") pod \"a2a19f02-b040-4ac7-ba5e-40aab5169420\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " Mar 19 09:52:41.898171 master-0 kubenswrapper[27819]: I0319 09:52:41.898120 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-svc\") pod \"a2a19f02-b040-4ac7-ba5e-40aab5169420\" (UID: \"a2a19f02-b040-4ac7-ba5e-40aab5169420\") " Mar 19 09:52:41.914953 master-0 kubenswrapper[27819]: I0319 09:52:41.907907 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2a19f02-b040-4ac7-ba5e-40aab5169420-kube-api-access-9jd8c" (OuterVolumeSpecName: "kube-api-access-9jd8c") pod "a2a19f02-b040-4ac7-ba5e-40aab5169420" (UID: "a2a19f02-b040-4ac7-ba5e-40aab5169420"). InnerVolumeSpecName "kube-api-access-9jd8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:41.952340 master-0 kubenswrapper[27819]: I0319 09:52:41.952263 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2a19f02-b040-4ac7-ba5e-40aab5169420" (UID: "a2a19f02-b040-4ac7-ba5e-40aab5169420"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:41.953214 master-0 kubenswrapper[27819]: I0319 09:52:41.953168 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a2a19f02-b040-4ac7-ba5e-40aab5169420" (UID: "a2a19f02-b040-4ac7-ba5e-40aab5169420"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:41.962075 master-0 kubenswrapper[27819]: I0319 09:52:41.961991 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-config" (OuterVolumeSpecName: "config") pod "a2a19f02-b040-4ac7-ba5e-40aab5169420" (UID: "a2a19f02-b040-4ac7-ba5e-40aab5169420"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:41.970761 master-0 kubenswrapper[27819]: I0319 09:52:41.970670 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a2a19f02-b040-4ac7-ba5e-40aab5169420" (UID: "a2a19f02-b040-4ac7-ba5e-40aab5169420"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:41.975074 master-0 kubenswrapper[27819]: I0319 09:52:41.974987 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a2a19f02-b040-4ac7-ba5e-40aab5169420" (UID: "a2a19f02-b040-4ac7-ba5e-40aab5169420"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:52:42.003228 master-0 kubenswrapper[27819]: I0319 09:52:42.000644 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:42.003228 master-0 kubenswrapper[27819]: I0319 09:52:42.000694 27819 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:42.003228 master-0 kubenswrapper[27819]: I0319 09:52:42.000710 27819 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:42.003228 master-0 kubenswrapper[27819]: I0319 09:52:42.000724 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jd8c\" (UniqueName: \"kubernetes.io/projected/a2a19f02-b040-4ac7-ba5e-40aab5169420-kube-api-access-9jd8c\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:42.003228 master-0 kubenswrapper[27819]: I0319 09:52:42.000738 27819 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:42.003228 master-0 kubenswrapper[27819]: I0319 09:52:42.000750 27819 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2a19f02-b040-4ac7-ba5e-40aab5169420-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:42.288114 master-0 kubenswrapper[27819]: I0319 09:52:42.287933 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58968b8785-ggchc" 
event={"ID":"a2a19f02-b040-4ac7-ba5e-40aab5169420","Type":"ContainerDied","Data":"5c232ec83781016a7c11ce136bcdbfa7377350d16165013559daba4cc0d5ce29"} Mar 19 09:52:42.288114 master-0 kubenswrapper[27819]: I0319 09:52:42.288017 27819 scope.go:117] "RemoveContainer" containerID="e04af2a5f34d973f3b08b43c387cb86c88f8e64559d556dd858ae12d70c87536" Mar 19 09:52:42.288733 master-0 kubenswrapper[27819]: I0319 09:52:42.287968 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58968b8785-ggchc" Mar 19 09:52:42.311650 master-0 kubenswrapper[27819]: I0319 09:52:42.311466 27819 generic.go:334] "Generic (PLEG): container finished" podID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerID="0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e" exitCode=143 Mar 19 09:52:42.311977 master-0 kubenswrapper[27819]: I0319 09:52:42.311938 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed41734-ebfb-4bf9-836e-43e82b05e510","Type":"ContainerDied","Data":"0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e"} Mar 19 09:52:42.341064 master-0 kubenswrapper[27819]: I0319 09:52:42.341017 27819 scope.go:117] "RemoveContainer" containerID="ffd146b84924912685c7d1573a1e482c9ec50803ba50f5fe3e244e9dffaf32d5" Mar 19 09:52:42.347162 master-0 kubenswrapper[27819]: I0319 09:52:42.347099 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58968b8785-ggchc"] Mar 19 09:52:42.367134 master-0 kubenswrapper[27819]: I0319 09:52:42.367065 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58968b8785-ggchc"] Mar 19 09:52:43.294458 master-0 kubenswrapper[27819]: I0319 09:52:43.294399 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2a19f02-b040-4ac7-ba5e-40aab5169420" path="/var/lib/kubelet/pods/a2a19f02-b040-4ac7-ba5e-40aab5169420/volumes" Mar 19 09:52:43.327323 master-0 kubenswrapper[27819]: 
I0319 09:52:43.327239 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-log" containerID="cri-o://70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c" gracePeriod=30 Mar 19 09:52:43.328057 master-0 kubenswrapper[27819]: I0319 09:52:43.327340 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-api" containerID="cri-o://58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222" gracePeriod=30 Mar 19 09:52:43.502062 master-0 kubenswrapper[27819]: E0319 09:52:43.501942 27819 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 09:52:43.504025 master-0 kubenswrapper[27819]: E0319 09:52:43.503960 27819 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 09:52:43.508453 master-0 kubenswrapper[27819]: E0319 09:52:43.508408 27819 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 19 09:52:43.508571 master-0 kubenswrapper[27819]: E0319 09:52:43.508456 27819 prober.go:104] "Probe errored" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8cb2ca57-2118-4862-a72c-3cb12baf7972" containerName="nova-scheduler-scheduler" Mar 19 09:52:44.024350 master-0 kubenswrapper[27819]: I0319 09:52:44.024280 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:44.150198 master-0 kubenswrapper[27819]: I0319 09:52:44.150134 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mks48\" (UniqueName: \"kubernetes.io/projected/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-kube-api-access-mks48\") pod \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " Mar 19 09:52:44.150432 master-0 kubenswrapper[27819]: I0319 09:52:44.150322 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-combined-ca-bundle\") pod \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " Mar 19 09:52:44.150432 master-0 kubenswrapper[27819]: I0319 09:52:44.150360 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-internal-tls-certs\") pod \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " Mar 19 09:52:44.150432 master-0 kubenswrapper[27819]: I0319 09:52:44.150395 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-config-data\") pod \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " Mar 19 09:52:44.150740 master-0 kubenswrapper[27819]: 
I0319 09:52:44.150494 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-logs\") pod \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " Mar 19 09:52:44.150740 master-0 kubenswrapper[27819]: I0319 09:52:44.150530 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-public-tls-certs\") pod \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\" (UID: \"bb4bc283-1922-4f11-b9f7-d875d8f2e99b\") " Mar 19 09:52:44.151263 master-0 kubenswrapper[27819]: I0319 09:52:44.151208 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-logs" (OuterVolumeSpecName: "logs") pod "bb4bc283-1922-4f11-b9f7-d875d8f2e99b" (UID: "bb4bc283-1922-4f11-b9f7-d875d8f2e99b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:52:44.154372 master-0 kubenswrapper[27819]: I0319 09:52:44.154276 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-kube-api-access-mks48" (OuterVolumeSpecName: "kube-api-access-mks48") pod "bb4bc283-1922-4f11-b9f7-d875d8f2e99b" (UID: "bb4bc283-1922-4f11-b9f7-d875d8f2e99b"). InnerVolumeSpecName "kube-api-access-mks48". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:44.177070 master-0 kubenswrapper[27819]: I0319 09:52:44.177006 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-config-data" (OuterVolumeSpecName: "config-data") pod "bb4bc283-1922-4f11-b9f7-d875d8f2e99b" (UID: "bb4bc283-1922-4f11-b9f7-d875d8f2e99b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:44.179984 master-0 kubenswrapper[27819]: I0319 09:52:44.179952 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb4bc283-1922-4f11-b9f7-d875d8f2e99b" (UID: "bb4bc283-1922-4f11-b9f7-d875d8f2e99b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:44.203304 master-0 kubenswrapper[27819]: I0319 09:52:44.203242 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb4bc283-1922-4f11-b9f7-d875d8f2e99b" (UID: "bb4bc283-1922-4f11-b9f7-d875d8f2e99b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:44.203814 master-0 kubenswrapper[27819]: I0319 09:52:44.203778 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb4bc283-1922-4f11-b9f7-d875d8f2e99b" (UID: "bb4bc283-1922-4f11-b9f7-d875d8f2e99b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:44.253285 master-0 kubenswrapper[27819]: I0319 09:52:44.253211 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:44.253285 master-0 kubenswrapper[27819]: I0319 09:52:44.253265 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:44.253285 master-0 kubenswrapper[27819]: I0319 09:52:44.253276 27819 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:44.253285 master-0 kubenswrapper[27819]: I0319 09:52:44.253288 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mks48\" (UniqueName: \"kubernetes.io/projected/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-kube-api-access-mks48\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:44.253285 master-0 kubenswrapper[27819]: I0319 09:52:44.253302 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:44.253653 master-0 kubenswrapper[27819]: I0319 09:52:44.253311 27819 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb4bc283-1922-4f11-b9f7-d875d8f2e99b-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:44.341158 master-0 kubenswrapper[27819]: I0319 09:52:44.341100 27819 generic.go:334] "Generic (PLEG): container finished" podID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" 
containerID="58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222" exitCode=0 Mar 19 09:52:44.341158 master-0 kubenswrapper[27819]: I0319 09:52:44.341139 27819 generic.go:334] "Generic (PLEG): container finished" podID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerID="70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c" exitCode=143 Mar 19 09:52:44.341688 master-0 kubenswrapper[27819]: I0319 09:52:44.341162 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:44.341773 master-0 kubenswrapper[27819]: I0319 09:52:44.341163 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb4bc283-1922-4f11-b9f7-d875d8f2e99b","Type":"ContainerDied","Data":"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222"} Mar 19 09:52:44.341885 master-0 kubenswrapper[27819]: I0319 09:52:44.341858 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb4bc283-1922-4f11-b9f7-d875d8f2e99b","Type":"ContainerDied","Data":"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c"} Mar 19 09:52:44.341958 master-0 kubenswrapper[27819]: I0319 09:52:44.341945 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bb4bc283-1922-4f11-b9f7-d875d8f2e99b","Type":"ContainerDied","Data":"9689dd1076b1f67b36d31d06af3983d59f94d83adaeeeb95e38efed4544ff714"} Mar 19 09:52:44.342033 master-0 kubenswrapper[27819]: I0319 09:52:44.342020 27819 scope.go:117] "RemoveContainer" containerID="58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222" Mar 19 09:52:44.366490 master-0 kubenswrapper[27819]: I0319 09:52:44.366451 27819 scope.go:117] "RemoveContainer" containerID="70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c" Mar 19 09:52:44.387932 master-0 kubenswrapper[27819]: I0319 09:52:44.387879 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Mar 19 09:52:44.414441 master-0 kubenswrapper[27819]: I0319 09:52:44.414313 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:44.423422 master-0 kubenswrapper[27819]: I0319 09:52:44.423388 27819 scope.go:117] "RemoveContainer" containerID="58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222" Mar 19 09:52:44.431242 master-0 kubenswrapper[27819]: E0319 09:52:44.431157 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222\": container with ID starting with 58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222 not found: ID does not exist" containerID="58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222" Mar 19 09:52:44.431498 master-0 kubenswrapper[27819]: I0319 09:52:44.431251 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222"} err="failed to get container status \"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222\": rpc error: code = NotFound desc = could not find container \"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222\": container with ID starting with 58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222 not found: ID does not exist" Mar 19 09:52:44.431498 master-0 kubenswrapper[27819]: I0319 09:52:44.431307 27819 scope.go:117] "RemoveContainer" containerID="70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c" Mar 19 09:52:44.433394 master-0 kubenswrapper[27819]: I0319 09:52:44.433361 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:44.433722 master-0 kubenswrapper[27819]: E0319 09:52:44.433659 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c\": container with ID starting with 70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c not found: ID does not exist" containerID="70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c" Mar 19 09:52:44.433885 master-0 kubenswrapper[27819]: I0319 09:52:44.433841 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c"} err="failed to get container status \"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c\": rpc error: code = NotFound desc = could not find container \"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c\": container with ID starting with 70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c not found: ID does not exist" Mar 19 09:52:44.433995 master-0 kubenswrapper[27819]: I0319 09:52:44.433978 27819 scope.go:117] "RemoveContainer" containerID="58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222" Mar 19 09:52:44.434188 master-0 kubenswrapper[27819]: E0319 09:52:44.434013 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-log" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: I0319 09:52:44.434301 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-log" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: E0319 09:52:44.434374 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da9159b-7125-4a77-ae29-410332c3be7c" containerName="nova-manage" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: I0319 09:52:44.434383 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da9159b-7125-4a77-ae29-410332c3be7c" containerName="nova-manage" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: E0319 
09:52:44.434414 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-api" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: I0319 09:52:44.434422 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-api" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: E0319 09:52:44.434444 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerName="init" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: I0319 09:52:44.434451 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerName="init" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: E0319 09:52:44.434471 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerName="dnsmasq-dns" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: I0319 09:52:44.434479 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerName="dnsmasq-dns" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: I0319 09:52:44.434474 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222"} err="failed to get container status \"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222\": rpc error: code = NotFound desc = could not find container \"58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222\": container with ID starting with 58fb87a3a2e7a8f4ed90077935e65f5383149c7973948df29767b4f8ac21b222 not found: ID does not exist" Mar 19 09:52:44.434713 master-0 kubenswrapper[27819]: I0319 09:52:44.434534 27819 scope.go:117] "RemoveContainer" containerID="70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c" Mar 19 
09:52:44.435506 master-0 kubenswrapper[27819]: I0319 09:52:44.434849 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-api" Mar 19 09:52:44.435506 master-0 kubenswrapper[27819]: I0319 09:52:44.434890 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2a19f02-b040-4ac7-ba5e-40aab5169420" containerName="dnsmasq-dns" Mar 19 09:52:44.435506 master-0 kubenswrapper[27819]: I0319 09:52:44.434926 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" containerName="nova-api-log" Mar 19 09:52:44.435506 master-0 kubenswrapper[27819]: I0319 09:52:44.434938 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da9159b-7125-4a77-ae29-410332c3be7c" containerName="nova-manage" Mar 19 09:52:44.436420 master-0 kubenswrapper[27819]: I0319 09:52:44.436380 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:44.437169 master-0 kubenswrapper[27819]: I0319 09:52:44.437010 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c"} err="failed to get container status \"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c\": rpc error: code = NotFound desc = could not find container \"70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c\": container with ID starting with 70bd9d4b90f0ec4063b5d52cb9cccae8b91f49ed59c8d9f103e43dadf2f9459c not found: ID does not exist" Mar 19 09:52:44.439451 master-0 kubenswrapper[27819]: I0319 09:52:44.439410 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 19 09:52:44.442777 master-0 kubenswrapper[27819]: I0319 09:52:44.439979 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 
19 09:52:44.442777 master-0 kubenswrapper[27819]: I0319 09:52:44.440063 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 19 09:52:44.451414 master-0 kubenswrapper[27819]: I0319 09:52:44.451350 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:44.561763 master-0 kubenswrapper[27819]: I0319 09:52:44.561686 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s5hk\" (UniqueName: \"kubernetes.io/projected/93404e12-9dba-4478-acdf-b9073399ad93-kube-api-access-2s5hk\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.561763 master-0 kubenswrapper[27819]: I0319 09:52:44.561747 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-public-tls-certs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.561995 master-0 kubenswrapper[27819]: I0319 09:52:44.561864 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93404e12-9dba-4478-acdf-b9073399ad93-logs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.561995 master-0 kubenswrapper[27819]: I0319 09:52:44.561939 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.562078 master-0 kubenswrapper[27819]: I0319 09:52:44.562048 27819 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.562159 master-0 kubenswrapper[27819]: I0319 09:52:44.562133 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-config-data\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.665827 master-0 kubenswrapper[27819]: I0319 09:52:44.665027 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-config-data\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.665827 master-0 kubenswrapper[27819]: I0319 09:52:44.665251 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s5hk\" (UniqueName: \"kubernetes.io/projected/93404e12-9dba-4478-acdf-b9073399ad93-kube-api-access-2s5hk\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.665827 master-0 kubenswrapper[27819]: I0319 09:52:44.665316 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-public-tls-certs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.665827 master-0 kubenswrapper[27819]: I0319 09:52:44.665433 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/93404e12-9dba-4478-acdf-b9073399ad93-logs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.665827 master-0 kubenswrapper[27819]: I0319 09:52:44.665485 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.665827 master-0 kubenswrapper[27819]: I0319 09:52:44.665658 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.669123 master-0 kubenswrapper[27819]: I0319 09:52:44.669067 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93404e12-9dba-4478-acdf-b9073399ad93-logs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.670482 master-0 kubenswrapper[27819]: I0319 09:52:44.670441 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-config-data\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.671097 master-0 kubenswrapper[27819]: I0319 09:52:44.671018 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-public-tls-certs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.671562 master-0 
kubenswrapper[27819]: I0319 09:52:44.671487 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.672040 master-0 kubenswrapper[27819]: I0319 09:52:44.671982 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93404e12-9dba-4478-acdf-b9073399ad93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.709287 master-0 kubenswrapper[27819]: I0319 09:52:44.709234 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s5hk\" (UniqueName: \"kubernetes.io/projected/93404e12-9dba-4478-acdf-b9073399ad93-kube-api-access-2s5hk\") pod \"nova-api-0\" (UID: \"93404e12-9dba-4478-acdf-b9073399ad93\") " pod="openstack/nova-api-0" Mar 19 09:52:44.767039 master-0 kubenswrapper[27819]: I0319 09:52:44.766985 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 19 09:52:45.205883 master-0 kubenswrapper[27819]: I0319 09:52:45.205826 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:45.280778 master-0 kubenswrapper[27819]: I0319 09:52:45.280661 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed41734-ebfb-4bf9-836e-43e82b05e510-logs\") pod \"4ed41734-ebfb-4bf9-836e-43e82b05e510\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " Mar 19 09:52:45.280778 master-0 kubenswrapper[27819]: I0319 09:52:45.280764 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-combined-ca-bundle\") pod \"4ed41734-ebfb-4bf9-836e-43e82b05e510\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " Mar 19 09:52:45.281026 master-0 kubenswrapper[27819]: I0319 09:52:45.280847 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-nova-metadata-tls-certs\") pod \"4ed41734-ebfb-4bf9-836e-43e82b05e510\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " Mar 19 09:52:45.281026 master-0 kubenswrapper[27819]: I0319 09:52:45.280884 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjd79\" (UniqueName: \"kubernetes.io/projected/4ed41734-ebfb-4bf9-836e-43e82b05e510-kube-api-access-hjd79\") pod \"4ed41734-ebfb-4bf9-836e-43e82b05e510\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " Mar 19 09:52:45.281026 master-0 kubenswrapper[27819]: I0319 09:52:45.280915 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-config-data\") pod \"4ed41734-ebfb-4bf9-836e-43e82b05e510\" (UID: \"4ed41734-ebfb-4bf9-836e-43e82b05e510\") " Mar 19 09:52:45.282076 master-0 kubenswrapper[27819]: I0319 09:52:45.282044 27819 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ed41734-ebfb-4bf9-836e-43e82b05e510-logs" (OuterVolumeSpecName: "logs") pod "4ed41734-ebfb-4bf9-836e-43e82b05e510" (UID: "4ed41734-ebfb-4bf9-836e-43e82b05e510"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 19 09:52:45.292225 master-0 kubenswrapper[27819]: I0319 09:52:45.292172 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed41734-ebfb-4bf9-836e-43e82b05e510-kube-api-access-hjd79" (OuterVolumeSpecName: "kube-api-access-hjd79") pod "4ed41734-ebfb-4bf9-836e-43e82b05e510" (UID: "4ed41734-ebfb-4bf9-836e-43e82b05e510"). InnerVolumeSpecName "kube-api-access-hjd79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:45.304555 master-0 kubenswrapper[27819]: I0319 09:52:45.304434 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4bc283-1922-4f11-b9f7-d875d8f2e99b" path="/var/lib/kubelet/pods/bb4bc283-1922-4f11-b9f7-d875d8f2e99b/volumes" Mar 19 09:52:45.331812 master-0 kubenswrapper[27819]: I0319 09:52:45.331740 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-config-data" (OuterVolumeSpecName: "config-data") pod "4ed41734-ebfb-4bf9-836e-43e82b05e510" (UID: "4ed41734-ebfb-4bf9-836e-43e82b05e510"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:45.332101 master-0 kubenswrapper[27819]: I0319 09:52:45.331891 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ed41734-ebfb-4bf9-836e-43e82b05e510" (UID: "4ed41734-ebfb-4bf9-836e-43e82b05e510"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:45.365882 master-0 kubenswrapper[27819]: I0319 09:52:45.361026 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 19 09:52:45.365882 master-0 kubenswrapper[27819]: W0319 09:52:45.362211 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93404e12_9dba_4478_acdf_b9073399ad93.slice/crio-d84d216a4da68784767968c8d626022a6e7e59c60f8b9661565665de5c449a65 WatchSource:0}: Error finding container d84d216a4da68784767968c8d626022a6e7e59c60f8b9661565665de5c449a65: Status 404 returned error can't find the container with id d84d216a4da68784767968c8d626022a6e7e59c60f8b9661565665de5c449a65 Mar 19 09:52:45.385684 master-0 kubenswrapper[27819]: I0319 09:52:45.385606 27819 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed41734-ebfb-4bf9-836e-43e82b05e510-logs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:45.385684 master-0 kubenswrapper[27819]: I0319 09:52:45.385651 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:45.385684 master-0 kubenswrapper[27819]: I0319 09:52:45.385679 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjd79\" (UniqueName: \"kubernetes.io/projected/4ed41734-ebfb-4bf9-836e-43e82b05e510-kube-api-access-hjd79\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:45.385684 master-0 kubenswrapper[27819]: I0319 09:52:45.385693 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:45.391830 master-0 kubenswrapper[27819]: I0319 09:52:45.391752 27819 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4ed41734-ebfb-4bf9-836e-43e82b05e510" (UID: "4ed41734-ebfb-4bf9-836e-43e82b05e510"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:45.392141 master-0 kubenswrapper[27819]: I0319 09:52:45.391883 27819 generic.go:334] "Generic (PLEG): container finished" podID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerID="adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c" exitCode=0 Mar 19 09:52:45.392141 master-0 kubenswrapper[27819]: I0319 09:52:45.392013 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed41734-ebfb-4bf9-836e-43e82b05e510","Type":"ContainerDied","Data":"adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c"} Mar 19 09:52:45.392141 master-0 kubenswrapper[27819]: I0319 09:52:45.392050 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:45.392141 master-0 kubenswrapper[27819]: I0319 09:52:45.392076 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4ed41734-ebfb-4bf9-836e-43e82b05e510","Type":"ContainerDied","Data":"45a71a35f8b31a5447be32927c80a9944dacc72dc17bd222f627015071d538a5"} Mar 19 09:52:45.392141 master-0 kubenswrapper[27819]: I0319 09:52:45.392100 27819 scope.go:117] "RemoveContainer" containerID="adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c" Mar 19 09:52:45.425209 master-0 kubenswrapper[27819]: I0319 09:52:45.425164 27819 scope.go:117] "RemoveContainer" containerID="0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e" Mar 19 09:52:45.453437 master-0 kubenswrapper[27819]: I0319 09:52:45.453373 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:45.476845 master-0 kubenswrapper[27819]: I0319 09:52:45.476787 27819 scope.go:117] "RemoveContainer" containerID="adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c" Mar 19 09:52:45.477311 master-0 kubenswrapper[27819]: E0319 09:52:45.477285 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c\": container with ID starting with adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c not found: ID does not exist" containerID="adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c" Mar 19 09:52:45.477377 master-0 kubenswrapper[27819]: I0319 09:52:45.477320 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c"} err="failed to get container status \"adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c\": rpc error: code = NotFound desc = could not find 
container \"adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c\": container with ID starting with adf3b50b01195c1e3858a67e38b039ab337eb960ec6155c0e50eb6002f7d628c not found: ID does not exist" Mar 19 09:52:45.477377 master-0 kubenswrapper[27819]: I0319 09:52:45.477343 27819 scope.go:117] "RemoveContainer" containerID="0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e" Mar 19 09:52:45.478596 master-0 kubenswrapper[27819]: E0319 09:52:45.477890 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e\": container with ID starting with 0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e not found: ID does not exist" containerID="0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e" Mar 19 09:52:45.478596 master-0 kubenswrapper[27819]: I0319 09:52:45.477917 27819 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e"} err="failed to get container status \"0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e\": rpc error: code = NotFound desc = could not find container \"0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e\": container with ID starting with 0fe2a8dbbc8c539ef4f7c7c11926302eaf328478eeab158fed079fa02715870e not found: ID does not exist" Mar 19 09:52:45.478596 master-0 kubenswrapper[27819]: I0319 09:52:45.478259 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:45.487738 master-0 kubenswrapper[27819]: I0319 09:52:45.487669 27819 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ed41734-ebfb-4bf9-836e-43e82b05e510-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:45.491329 master-0 
kubenswrapper[27819]: I0319 09:52:45.491253 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:45.492596 master-0 kubenswrapper[27819]: E0319 09:52:45.491854 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-metadata" Mar 19 09:52:45.492596 master-0 kubenswrapper[27819]: I0319 09:52:45.492083 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-metadata" Mar 19 09:52:45.492596 master-0 kubenswrapper[27819]: E0319 09:52:45.492125 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-log" Mar 19 09:52:45.492596 master-0 kubenswrapper[27819]: I0319 09:52:45.492133 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-log" Mar 19 09:52:45.492596 master-0 kubenswrapper[27819]: I0319 09:52:45.492507 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-metadata" Mar 19 09:52:45.492596 master-0 kubenswrapper[27819]: I0319 09:52:45.492569 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" containerName="nova-metadata-log" Mar 19 09:52:45.494290 master-0 kubenswrapper[27819]: I0319 09:52:45.494253 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:45.496267 master-0 kubenswrapper[27819]: I0319 09:52:45.496233 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 19 09:52:45.496821 master-0 kubenswrapper[27819]: I0319 09:52:45.496800 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 19 09:52:45.573731 master-0 kubenswrapper[27819]: I0319 09:52:45.573666 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:45.590146 master-0 kubenswrapper[27819]: I0319 09:52:45.590100 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-config-data\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.590268 master-0 kubenswrapper[27819]: I0319 09:52:45.590240 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b95f9bf-7948-4742-896c-0402bbd7a943-logs\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.590360 master-0 kubenswrapper[27819]: I0319 09:52:45.590339 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.590405 master-0 kubenswrapper[27819]: I0319 09:52:45.590374 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.590445 master-0 kubenswrapper[27819]: I0319 09:52:45.590407 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7c9\" (UniqueName: \"kubernetes.io/projected/0b95f9bf-7948-4742-896c-0402bbd7a943-kube-api-access-pt7c9\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.694774 master-0 kubenswrapper[27819]: I0319 09:52:45.694626 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-config-data\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.695115 master-0 kubenswrapper[27819]: I0319 09:52:45.695084 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b95f9bf-7948-4742-896c-0402bbd7a943-logs\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.695679 master-0 kubenswrapper[27819]: I0319 09:52:45.695625 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b95f9bf-7948-4742-896c-0402bbd7a943-logs\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.696105 master-0 kubenswrapper[27819]: I0319 09:52:45.696065 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.697842 master-0 kubenswrapper[27819]: I0319 09:52:45.697716 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.697842 master-0 kubenswrapper[27819]: I0319 09:52:45.697820 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7c9\" (UniqueName: \"kubernetes.io/projected/0b95f9bf-7948-4742-896c-0402bbd7a943-kube-api-access-pt7c9\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.698604 master-0 kubenswrapper[27819]: I0319 09:52:45.697732 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-config-data\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.700052 master-0 kubenswrapper[27819]: I0319 09:52:45.700016 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.701867 master-0 kubenswrapper[27819]: I0319 09:52:45.700852 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b95f9bf-7948-4742-896c-0402bbd7a943-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.713666 
master-0 kubenswrapper[27819]: I0319 09:52:45.712821 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7c9\" (UniqueName: \"kubernetes.io/projected/0b95f9bf-7948-4742-896c-0402bbd7a943-kube-api-access-pt7c9\") pod \"nova-metadata-0\" (UID: \"0b95f9bf-7948-4742-896c-0402bbd7a943\") " pod="openstack/nova-metadata-0" Mar 19 09:52:45.907082 master-0 kubenswrapper[27819]: I0319 09:52:45.906826 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 19 09:52:46.419570 master-0 kubenswrapper[27819]: I0319 09:52:46.418160 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93404e12-9dba-4478-acdf-b9073399ad93","Type":"ContainerStarted","Data":"ed62ca9802a37762195ab1c3f7f5e252ada6c74db271eb1a7ffa9c7a97c62456"} Mar 19 09:52:46.419570 master-0 kubenswrapper[27819]: I0319 09:52:46.418233 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93404e12-9dba-4478-acdf-b9073399ad93","Type":"ContainerStarted","Data":"1b4d48ae01fe682c4183e8b2e57013d1d9e4f46bff95fbccb5f5f96865881b27"} Mar 19 09:52:46.419570 master-0 kubenswrapper[27819]: I0319 09:52:46.418246 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93404e12-9dba-4478-acdf-b9073399ad93","Type":"ContainerStarted","Data":"d84d216a4da68784767968c8d626022a6e7e59c60f8b9661565665de5c449a65"} Mar 19 09:52:46.542574 master-0 kubenswrapper[27819]: I0319 09:52:46.542166 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 19 09:52:46.682636 master-0 kubenswrapper[27819]: I0319 09:52:46.682338 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.682320937 podStartE2EDuration="2.682320937s" podCreationTimestamp="2026-03-19 09:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:46.666446566 +0000 UTC m=+1151.588024258" watchObservedRunningTime="2026-03-19 09:52:46.682320937 +0000 UTC m=+1151.603898629" Mar 19 09:52:47.293944 master-0 kubenswrapper[27819]: I0319 09:52:47.293827 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed41734-ebfb-4bf9-836e-43e82b05e510" path="/var/lib/kubelet/pods/4ed41734-ebfb-4bf9-836e-43e82b05e510/volumes" Mar 19 09:52:47.434363 master-0 kubenswrapper[27819]: I0319 09:52:47.434129 27819 generic.go:334] "Generic (PLEG): container finished" podID="8cb2ca57-2118-4862-a72c-3cb12baf7972" containerID="a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05" exitCode=0 Mar 19 09:52:47.434363 master-0 kubenswrapper[27819]: I0319 09:52:47.434231 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cb2ca57-2118-4862-a72c-3cb12baf7972","Type":"ContainerDied","Data":"a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05"} Mar 19 09:52:47.437875 master-0 kubenswrapper[27819]: I0319 09:52:47.437816 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b95f9bf-7948-4742-896c-0402bbd7a943","Type":"ContainerStarted","Data":"f36933e4c8f068279ff1aa33c7877240c9611668254cf03970448c58cfa638a4"} Mar 19 09:52:47.437991 master-0 kubenswrapper[27819]: I0319 09:52:47.437885 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0b95f9bf-7948-4742-896c-0402bbd7a943","Type":"ContainerStarted","Data":"875a222bad56a7cf84bf285391bcb95f91cf6809b5be9e11dc6f80346ea22ef1"} Mar 19 09:52:47.437991 master-0 kubenswrapper[27819]: I0319 09:52:47.437898 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0b95f9bf-7948-4742-896c-0402bbd7a943","Type":"ContainerStarted","Data":"63b26b60e7522a66becaf8b68722d7fdbf9a5df50bf73b0dce86aadca0827f6f"} Mar 19 09:52:47.477453 master-0 kubenswrapper[27819]: I0319 09:52:47.475049 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.475025731 podStartE2EDuration="2.475025731s" podCreationTimestamp="2026-03-19 09:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:47.461150892 +0000 UTC m=+1152.382728594" watchObservedRunningTime="2026-03-19 09:52:47.475025731 +0000 UTC m=+1152.396603423" Mar 19 09:52:47.731525 master-0 kubenswrapper[27819]: I0319 09:52:47.731476 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:47.880249 master-0 kubenswrapper[27819]: I0319 09:52:47.880184 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxbvk\" (UniqueName: \"kubernetes.io/projected/8cb2ca57-2118-4862-a72c-3cb12baf7972-kube-api-access-kxbvk\") pod \"8cb2ca57-2118-4862-a72c-3cb12baf7972\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " Mar 19 09:52:47.880584 master-0 kubenswrapper[27819]: I0319 09:52:47.880358 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-combined-ca-bundle\") pod \"8cb2ca57-2118-4862-a72c-3cb12baf7972\" (UID: \"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " Mar 19 09:52:47.880584 master-0 kubenswrapper[27819]: I0319 09:52:47.880579 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-config-data\") pod \"8cb2ca57-2118-4862-a72c-3cb12baf7972\" (UID: 
\"8cb2ca57-2118-4862-a72c-3cb12baf7972\") " Mar 19 09:52:47.884089 master-0 kubenswrapper[27819]: I0319 09:52:47.884001 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb2ca57-2118-4862-a72c-3cb12baf7972-kube-api-access-kxbvk" (OuterVolumeSpecName: "kube-api-access-kxbvk") pod "8cb2ca57-2118-4862-a72c-3cb12baf7972" (UID: "8cb2ca57-2118-4862-a72c-3cb12baf7972"). InnerVolumeSpecName "kube-api-access-kxbvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:52:47.913468 master-0 kubenswrapper[27819]: I0319 09:52:47.912670 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-config-data" (OuterVolumeSpecName: "config-data") pod "8cb2ca57-2118-4862-a72c-3cb12baf7972" (UID: "8cb2ca57-2118-4862-a72c-3cb12baf7972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:47.921605 master-0 kubenswrapper[27819]: I0319 09:52:47.921535 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cb2ca57-2118-4862-a72c-3cb12baf7972" (UID: "8cb2ca57-2118-4862-a72c-3cb12baf7972"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:52:47.984363 master-0 kubenswrapper[27819]: I0319 09:52:47.984204 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxbvk\" (UniqueName: \"kubernetes.io/projected/8cb2ca57-2118-4862-a72c-3cb12baf7972-kube-api-access-kxbvk\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:47.984363 master-0 kubenswrapper[27819]: I0319 09:52:47.984242 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:47.984363 master-0 kubenswrapper[27819]: I0319 09:52:47.984254 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb2ca57-2118-4862-a72c-3cb12baf7972-config-data\") on node \"master-0\" DevicePath \"\"" Mar 19 09:52:48.450711 master-0 kubenswrapper[27819]: I0319 09:52:48.450645 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:48.451513 master-0 kubenswrapper[27819]: I0319 09:52:48.451456 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8cb2ca57-2118-4862-a72c-3cb12baf7972","Type":"ContainerDied","Data":"0d26724e00f6b2d3150a98b6c6b094dbb7b4831cc59fbdf9037c7a099ad782dc"} Mar 19 09:52:48.451637 master-0 kubenswrapper[27819]: I0319 09:52:48.451527 27819 scope.go:117] "RemoveContainer" containerID="a6c83358342757a759545a4ec4c1d5b04e9f5b3220ae546540e67e6b12c8bc05" Mar 19 09:52:48.501518 master-0 kubenswrapper[27819]: I0319 09:52:48.501465 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:48.519769 master-0 kubenswrapper[27819]: I0319 09:52:48.518293 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:48.532297 master-0 kubenswrapper[27819]: I0319 09:52:48.532243 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:48.532968 master-0 kubenswrapper[27819]: E0319 09:52:48.532937 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb2ca57-2118-4862-a72c-3cb12baf7972" containerName="nova-scheduler-scheduler" Mar 19 09:52:48.532968 master-0 kubenswrapper[27819]: I0319 09:52:48.532964 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb2ca57-2118-4862-a72c-3cb12baf7972" containerName="nova-scheduler-scheduler" Mar 19 09:52:48.533378 master-0 kubenswrapper[27819]: I0319 09:52:48.533348 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb2ca57-2118-4862-a72c-3cb12baf7972" containerName="nova-scheduler-scheduler" Mar 19 09:52:48.534259 master-0 kubenswrapper[27819]: I0319 09:52:48.534233 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:48.536756 master-0 kubenswrapper[27819]: I0319 09:52:48.536707 27819 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 19 09:52:48.550178 master-0 kubenswrapper[27819]: I0319 09:52:48.550113 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:48.599316 master-0 kubenswrapper[27819]: I0319 09:52:48.599243 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5mk\" (UniqueName: \"kubernetes.io/projected/5366e139-b2ac-4c39-9253-93cfddea22c7-kube-api-access-mn5mk\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.599599 master-0 kubenswrapper[27819]: I0319 09:52:48.599403 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5366e139-b2ac-4c39-9253-93cfddea22c7-config-data\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.599650 master-0 kubenswrapper[27819]: I0319 09:52:48.599613 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5366e139-b2ac-4c39-9253-93cfddea22c7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.701709 master-0 kubenswrapper[27819]: I0319 09:52:48.701598 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5366e139-b2ac-4c39-9253-93cfddea22c7-config-data\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 
09:52:48.701979 master-0 kubenswrapper[27819]: I0319 09:52:48.701853 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5366e139-b2ac-4c39-9253-93cfddea22c7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.701979 master-0 kubenswrapper[27819]: I0319 09:52:48.701934 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn5mk\" (UniqueName: \"kubernetes.io/projected/5366e139-b2ac-4c39-9253-93cfddea22c7-kube-api-access-mn5mk\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.705435 master-0 kubenswrapper[27819]: I0319 09:52:48.705385 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5366e139-b2ac-4c39-9253-93cfddea22c7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.705679 master-0 kubenswrapper[27819]: I0319 09:52:48.705634 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5366e139-b2ac-4c39-9253-93cfddea22c7-config-data\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.719023 master-0 kubenswrapper[27819]: I0319 09:52:48.718913 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn5mk\" (UniqueName: \"kubernetes.io/projected/5366e139-b2ac-4c39-9253-93cfddea22c7-kube-api-access-mn5mk\") pod \"nova-scheduler-0\" (UID: \"5366e139-b2ac-4c39-9253-93cfddea22c7\") " pod="openstack/nova-scheduler-0" Mar 19 09:52:48.876369 master-0 kubenswrapper[27819]: I0319 09:52:48.876306 27819 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 19 09:52:49.293588 master-0 kubenswrapper[27819]: I0319 09:52:49.293439 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb2ca57-2118-4862-a72c-3cb12baf7972" path="/var/lib/kubelet/pods/8cb2ca57-2118-4862-a72c-3cb12baf7972/volumes" Mar 19 09:52:49.364912 master-0 kubenswrapper[27819]: I0319 09:52:49.364742 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 19 09:52:49.462301 master-0 kubenswrapper[27819]: I0319 09:52:49.462227 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5366e139-b2ac-4c39-9253-93cfddea22c7","Type":"ContainerStarted","Data":"ffa10ac15fc053e1027f095a9d1c476567f8d4ece86032c579a6a7993816182e"} Mar 19 09:52:50.476603 master-0 kubenswrapper[27819]: I0319 09:52:50.476514 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5366e139-b2ac-4c39-9253-93cfddea22c7","Type":"ContainerStarted","Data":"a67b2d060e07f0b51b044f4d76707b14003967c717e7e31ec0b631c57ae3fd43"} Mar 19 09:52:53.876509 master-0 kubenswrapper[27819]: I0319 09:52:53.876456 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 19 09:52:54.767705 master-0 kubenswrapper[27819]: I0319 09:52:54.767630 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 09:52:54.767705 master-0 kubenswrapper[27819]: I0319 09:52:54.767694 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 19 09:52:55.783927 master-0 kubenswrapper[27819]: I0319 09:52:55.783814 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="93404e12-9dba-4478-acdf-b9073399ad93" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.11:8774/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:55.784674 master-0 kubenswrapper[27819]: I0319 09:52:55.783830 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="93404e12-9dba-4478-acdf-b9073399ad93" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.11:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:55.908142 master-0 kubenswrapper[27819]: I0319 09:52:55.908073 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 09:52:55.908142 master-0 kubenswrapper[27819]: I0319 09:52:55.908136 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 19 09:52:56.917976 master-0 kubenswrapper[27819]: I0319 09:52:56.917906 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b95f9bf-7948-4742-896c-0402bbd7a943" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:56.925001 master-0 kubenswrapper[27819]: I0319 09:52:56.924950 27819 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0b95f9bf-7948-4742-896c-0402bbd7a943" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.12:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:52:58.876728 master-0 kubenswrapper[27819]: I0319 09:52:58.876644 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 19 09:52:58.904425 master-0 kubenswrapper[27819]: I0319 09:52:58.904382 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 19 09:52:58.935107 master-0 
kubenswrapper[27819]: I0319 09:52:58.935010 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=10.934986762 podStartE2EDuration="10.934986762s" podCreationTimestamp="2026-03-19 09:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:52:50.497862965 +0000 UTC m=+1155.419440657" watchObservedRunningTime="2026-03-19 09:52:58.934986762 +0000 UTC m=+1163.856564464" Mar 19 09:52:59.600965 master-0 kubenswrapper[27819]: I0319 09:52:59.600890 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 19 09:53:02.767352 master-0 kubenswrapper[27819]: I0319 09:53:02.767268 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 09:53:02.768026 master-0 kubenswrapper[27819]: I0319 09:53:02.767425 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 19 09:53:03.907899 master-0 kubenswrapper[27819]: I0319 09:53:03.907847 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 09:53:03.907899 master-0 kubenswrapper[27819]: I0319 09:53:03.907901 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 19 09:53:04.773565 master-0 kubenswrapper[27819]: I0319 09:53:04.773494 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 09:53:04.774627 master-0 kubenswrapper[27819]: I0319 09:53:04.774593 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 19 09:53:04.782228 master-0 kubenswrapper[27819]: I0319 09:53:04.782162 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 09:53:05.644276 master-0 
kubenswrapper[27819]: I0319 09:53:05.644224 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 19 09:53:05.918827 master-0 kubenswrapper[27819]: I0319 09:53:05.918692 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 09:53:05.926621 master-0 kubenswrapper[27819]: I0319 09:53:05.926588 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 19 09:53:05.931854 master-0 kubenswrapper[27819]: I0319 09:53:05.931797 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 09:53:06.654311 master-0 kubenswrapper[27819]: I0319 09:53:06.654258 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 19 09:53:32.636775 master-0 kubenswrapper[27819]: I0319 09:53:32.636565 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-l6zr5"] Mar 19 09:53:32.637591 master-0 kubenswrapper[27819]: I0319 09:53:32.636917 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" podUID="c07acf21-e79c-485e-a041-44c2aa7dbecc" containerName="sushy-emulator" containerID="cri-o://de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b" gracePeriod=30 Mar 19 09:53:33.101221 master-0 kubenswrapper[27819]: E0319 09:53:33.101156 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc07acf21_e79c_485e_a041_44c2aa7dbecc.slice/crio-de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b.scope\": RecentStats: unable to find data in memory cache]" Mar 19 09:53:33.388903 master-0 kubenswrapper[27819]: I0319 09:53:33.388756 27819 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:53:33.500600 master-0 kubenswrapper[27819]: I0319 09:53:33.497435 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75ncz\" (UniqueName: \"kubernetes.io/projected/c07acf21-e79c-485e-a041-44c2aa7dbecc-kube-api-access-75ncz\") pod \"c07acf21-e79c-485e-a041-44c2aa7dbecc\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " Mar 19 09:53:33.500600 master-0 kubenswrapper[27819]: I0319 09:53:33.497779 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c07acf21-e79c-485e-a041-44c2aa7dbecc-os-client-config\") pod \"c07acf21-e79c-485e-a041-44c2aa7dbecc\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " Mar 19 09:53:33.500600 master-0 kubenswrapper[27819]: I0319 09:53:33.497919 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c07acf21-e79c-485e-a041-44c2aa7dbecc-sushy-emulator-config\") pod \"c07acf21-e79c-485e-a041-44c2aa7dbecc\" (UID: \"c07acf21-e79c-485e-a041-44c2aa7dbecc\") " Mar 19 09:53:33.500600 master-0 kubenswrapper[27819]: I0319 09:53:33.499231 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07acf21-e79c-485e-a041-44c2aa7dbecc-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "c07acf21-e79c-485e-a041-44c2aa7dbecc" (UID: "c07acf21-e79c-485e-a041-44c2aa7dbecc"). InnerVolumeSpecName "sushy-emulator-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 19 09:53:33.501563 master-0 kubenswrapper[27819]: I0319 09:53:33.501152 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg"] Mar 19 09:53:33.505600 master-0 kubenswrapper[27819]: E0319 09:53:33.501758 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07acf21-e79c-485e-a041-44c2aa7dbecc" containerName="sushy-emulator" Mar 19 09:53:33.505600 master-0 kubenswrapper[27819]: I0319 09:53:33.501778 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07acf21-e79c-485e-a041-44c2aa7dbecc" containerName="sushy-emulator" Mar 19 09:53:33.505600 master-0 kubenswrapper[27819]: I0319 09:53:33.502024 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07acf21-e79c-485e-a041-44c2aa7dbecc" containerName="sushy-emulator" Mar 19 09:53:33.505600 master-0 kubenswrapper[27819]: I0319 09:53:33.502354 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c07acf21-e79c-485e-a041-44c2aa7dbecc-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "c07acf21-e79c-485e-a041-44c2aa7dbecc" (UID: "c07acf21-e79c-485e-a041-44c2aa7dbecc"). InnerVolumeSpecName "os-client-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 19 09:53:33.505600 master-0 kubenswrapper[27819]: I0319 09:53:33.503001 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.511223 master-0 kubenswrapper[27819]: I0319 09:53:33.511152 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07acf21-e79c-485e-a041-44c2aa7dbecc-kube-api-access-75ncz" (OuterVolumeSpecName: "kube-api-access-75ncz") pod "c07acf21-e79c-485e-a041-44c2aa7dbecc" (UID: "c07acf21-e79c-485e-a041-44c2aa7dbecc"). InnerVolumeSpecName "kube-api-access-75ncz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 19 09:53:33.514931 master-0 kubenswrapper[27819]: I0319 09:53:33.514857 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg"] Mar 19 09:53:33.600006 master-0 kubenswrapper[27819]: I0319 09:53:33.599961 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-os-client-config\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.600338 master-0 kubenswrapper[27819]: I0319 09:53:33.600159 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84s6d\" (UniqueName: \"kubernetes.io/projected/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-kube-api-access-84s6d\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.600338 master-0 kubenswrapper[27819]: I0319 09:53:33.600213 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.600338 master-0 kubenswrapper[27819]: I0319 09:53:33.600281 27819 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/c07acf21-e79c-485e-a041-44c2aa7dbecc-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:53:33.600338 master-0 kubenswrapper[27819]: I0319 09:53:33.600295 27819 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-75ncz\" (UniqueName: \"kubernetes.io/projected/c07acf21-e79c-485e-a041-44c2aa7dbecc-kube-api-access-75ncz\") on node \"master-0\" DevicePath \"\"" Mar 19 09:53:33.600338 master-0 kubenswrapper[27819]: I0319 09:53:33.600305 27819 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/c07acf21-e79c-485e-a041-44c2aa7dbecc-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 19 09:53:33.702863 master-0 kubenswrapper[27819]: I0319 09:53:33.702747 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84s6d\" (UniqueName: \"kubernetes.io/projected/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-kube-api-access-84s6d\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.703739 master-0 kubenswrapper[27819]: I0319 09:53:33.703006 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.703739 master-0 kubenswrapper[27819]: I0319 09:53:33.703087 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-os-client-config\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.706251 master-0 kubenswrapper[27819]: I0319 09:53:33.706208 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: 
\"kubernetes.io/configmap/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.707861 master-0 kubenswrapper[27819]: I0319 09:53:33.707827 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-os-client-config\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.718709 master-0 kubenswrapper[27819]: I0319 09:53:33.718665 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84s6d\" (UniqueName: \"kubernetes.io/projected/3069ba0f-3947-42e7-82bd-ffaf9578cfb0-kube-api-access-84s6d\") pod \"sushy-emulator-54b65fbdd6-kb9rg\" (UID: \"3069ba0f-3947-42e7-82bd-ffaf9578cfb0\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.927721 master-0 kubenswrapper[27819]: I0319 09:53:33.916851 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:33.972171 master-0 kubenswrapper[27819]: I0319 09:53:33.972124 27819 generic.go:334] "Generic (PLEG): container finished" podID="c07acf21-e79c-485e-a041-44c2aa7dbecc" containerID="de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b" exitCode=0 Mar 19 09:53:33.972400 master-0 kubenswrapper[27819]: I0319 09:53:33.972198 27819 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" Mar 19 09:53:33.972400 master-0 kubenswrapper[27819]: I0319 09:53:33.972185 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" event={"ID":"c07acf21-e79c-485e-a041-44c2aa7dbecc","Type":"ContainerDied","Data":"de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b"} Mar 19 09:53:33.972400 master-0 kubenswrapper[27819]: I0319 09:53:33.972357 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-l6zr5" event={"ID":"c07acf21-e79c-485e-a041-44c2aa7dbecc","Type":"ContainerDied","Data":"0724e6b2b2292c153507f7fc633c278193696d798c464c0e2035e41a2c8e28f4"} Mar 19 09:53:33.972400 master-0 kubenswrapper[27819]: I0319 09:53:33.972393 27819 scope.go:117] "RemoveContainer" containerID="de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b" Mar 19 09:53:34.057006 master-0 kubenswrapper[27819]: I0319 09:53:34.056898 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-l6zr5"] Mar 19 09:53:34.081629 master-0 kubenswrapper[27819]: I0319 09:53:34.081563 27819 scope.go:117] "RemoveContainer" containerID="de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b" Mar 19 09:53:34.082058 master-0 kubenswrapper[27819]: E0319 09:53:34.082021 27819 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b\": container with ID starting with de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b not found: ID does not exist" containerID="de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b" Mar 19 09:53:34.082115 master-0 kubenswrapper[27819]: I0319 09:53:34.082057 27819 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b"} err="failed to get container status \"de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b\": rpc error: code = NotFound desc = could not find container \"de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b\": container with ID starting with de592beb2185ca43de66fec22c3d0c99ca4195c660143f8939ff908d69b10c1b not found: ID does not exist" Mar 19 09:53:34.082173 master-0 kubenswrapper[27819]: I0319 09:53:34.082140 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-l6zr5"] Mar 19 09:53:34.487667 master-0 kubenswrapper[27819]: W0319 09:53:34.486118 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3069ba0f_3947_42e7_82bd_ffaf9578cfb0.slice/crio-3c2630edc6a3386037ec73f6d59b86cceb416b5fdc2d0a089d7fe26c9d67aee9 WatchSource:0}: Error finding container 3c2630edc6a3386037ec73f6d59b86cceb416b5fdc2d0a089d7fe26c9d67aee9: Status 404 returned error can't find the container with id 3c2630edc6a3386037ec73f6d59b86cceb416b5fdc2d0a089d7fe26c9d67aee9 Mar 19 09:53:34.489625 master-0 kubenswrapper[27819]: I0319 09:53:34.488959 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg"] Mar 19 09:53:34.995959 master-0 kubenswrapper[27819]: I0319 09:53:34.995770 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" event={"ID":"3069ba0f-3947-42e7-82bd-ffaf9578cfb0","Type":"ContainerStarted","Data":"3c2630edc6a3386037ec73f6d59b86cceb416b5fdc2d0a089d7fe26c9d67aee9"} Mar 19 09:53:35.300858 master-0 kubenswrapper[27819]: I0319 09:53:35.300486 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07acf21-e79c-485e-a041-44c2aa7dbecc" path="/var/lib/kubelet/pods/c07acf21-e79c-485e-a041-44c2aa7dbecc/volumes" Mar 19 
09:53:36.024858 master-0 kubenswrapper[27819]: I0319 09:53:36.024788 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" event={"ID":"3069ba0f-3947-42e7-82bd-ffaf9578cfb0","Type":"ContainerStarted","Data":"d593aa1cb88cc31cd1be80e9ce10c046c0d5a6e444c22d773c88bf741d3d2f6e"} Mar 19 09:53:36.064740 master-0 kubenswrapper[27819]: I0319 09:53:36.064639 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" podStartSLOduration=3.064299809 podStartE2EDuration="3.064299809s" podCreationTimestamp="2026-03-19 09:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-19 09:53:36.053197421 +0000 UTC m=+1200.974775113" watchObservedRunningTime="2026-03-19 09:53:36.064299809 +0000 UTC m=+1200.985877501" Mar 19 09:53:43.917893 master-0 kubenswrapper[27819]: I0319 09:53:43.917843 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:43.918747 master-0 kubenswrapper[27819]: I0319 09:53:43.918714 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:43.929558 master-0 kubenswrapper[27819]: I0319 09:53:43.929486 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:53:44.108263 master-0 kubenswrapper[27819]: I0319 09:53:44.108017 27819 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-54b65fbdd6-kb9rg" Mar 19 09:55:01.029031 master-0 kubenswrapper[27819]: I0319 09:55:01.028960 27819 scope.go:117] "RemoveContainer" containerID="e89c043f715c92ab3c228a470bb485aa633f42531b1b8de9e6d8ba3fe662cf3d" Mar 19 09:55:01.060213 master-0 kubenswrapper[27819]: I0319 
09:55:01.060165 27819 scope.go:117] "RemoveContainer" containerID="ae0636da7880ae75a618ed37816678563df55b2ac15bec4865b8f95feccae8d5" Mar 19 09:55:01.081851 master-0 kubenswrapper[27819]: I0319 09:55:01.081789 27819 scope.go:117] "RemoveContainer" containerID="38e1c327f4f7ba04728e101a64a7122348c211af28a3706940a8e1d55b57a10a" Mar 19 09:56:00.917807 master-0 kubenswrapper[27819]: I0319 09:56:00.917742 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podUID="db5a3772-4afd-478b-85d3-dd454056f3b9" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.128.0.221:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:01.055826 master-0 kubenswrapper[27819]: I0319 09:56:01.055754 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-scheduler-0" podUID="d035bd9e-47a7-4d0a-a754-b98bc99dd02b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.222:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:01.106485 master-0 kubenswrapper[27819]: I0319 09:56:01.105846 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-backup-0" podUID="fc3c3fd4-1650-4012-9488-cba497b6776e" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.128.0.223:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:01.177338 master-0 kubenswrapper[27819]: I0319 09:56:01.177182 27819 scope.go:117] "RemoveContainer" containerID="2e2b4283771e634f66af83b320d8bb1c6334ae2d8468bf708b29ea5ace96a062" Mar 19 09:56:01.198162 master-0 kubenswrapper[27819]: I0319 09:56:01.198121 27819 scope.go:117] "RemoveContainer" containerID="773154f727dad2f4223683f37ce3ffd5f94657ca52e7a6ad956b5383cc2eda4e" Mar 19 09:56:01.223443 master-0 kubenswrapper[27819]: I0319 09:56:01.223413 27819 scope.go:117] 
"RemoveContainer" containerID="bc4f09ab2712fd65e4041d58288c309b337c057e27b8efa27bd516d2a8320730" Mar 19 09:56:05.960880 master-0 kubenswrapper[27819]: I0319 09:56:05.960763 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podUID="db5a3772-4afd-478b-85d3-dd454056f3b9" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.128.0.221:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:06.098048 master-0 kubenswrapper[27819]: I0319 09:56:06.097926 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-scheduler-0" podUID="d035bd9e-47a7-4d0a-a754-b98bc99dd02b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.222:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:06.148876 master-0 kubenswrapper[27819]: I0319 09:56:06.148773 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-backup-0" podUID="fc3c3fd4-1650-4012-9488-cba497b6776e" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.128.0.223:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:11.007979 master-0 kubenswrapper[27819]: I0319 09:56:11.007868 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podUID="db5a3772-4afd-478b-85d3-dd454056f3b9" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.128.0.221:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:11.007979 master-0 kubenswrapper[27819]: I0319 09:56:11.007987 27819 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:56:11.009030 master-0 kubenswrapper[27819]: I0319 09:56:11.008960 27819 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="cinder-volume" containerStatusID={"Type":"cri-o","ID":"37f15bed58477898871a1eecca1d2b2d15baef6fd04212bcc9c7c473fe158127"} pod="openstack/cinder-255d6-volume-lvm-iscsi-0" containerMessage="Container cinder-volume failed liveness probe, will be restarted" Mar 19 09:56:11.009110 master-0 kubenswrapper[27819]: I0319 09:56:11.009037 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" podUID="db5a3772-4afd-478b-85d3-dd454056f3b9" containerName="cinder-volume" containerID="cri-o://37f15bed58477898871a1eecca1d2b2d15baef6fd04212bcc9c7c473fe158127" gracePeriod=30 Mar 19 09:56:11.139804 master-0 kubenswrapper[27819]: I0319 09:56:11.139726 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-scheduler-0" podUID="d035bd9e-47a7-4d0a-a754-b98bc99dd02b" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.222:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:11.139804 master-0 kubenswrapper[27819]: I0319 09:56:11.139827 27819 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:56:11.140790 master-0 kubenswrapper[27819]: I0319 09:56:11.140752 27819 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"a6a69c774ccf2c1f843576e6258f5d70fc7a501a1840be52fcf238c4d496d0a5"} pod="openstack/cinder-255d6-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 19 09:56:11.140870 master-0 kubenswrapper[27819]: I0319 09:56:11.140817 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-scheduler-0" podUID="d035bd9e-47a7-4d0a-a754-b98bc99dd02b" containerName="cinder-scheduler" 
containerID="cri-o://a6a69c774ccf2c1f843576e6258f5d70fc7a501a1840be52fcf238c4d496d0a5" gracePeriod=30 Mar 19 09:56:11.190949 master-0 kubenswrapper[27819]: I0319 09:56:11.190871 27819 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-255d6-backup-0" podUID="fc3c3fd4-1650-4012-9488-cba497b6776e" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.128.0.223:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 19 09:56:11.191232 master-0 kubenswrapper[27819]: I0319 09:56:11.190996 27819 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-255d6-backup-0" Mar 19 09:56:11.195983 master-0 kubenswrapper[27819]: I0319 09:56:11.195917 27819 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-backup" containerStatusID={"Type":"cri-o","ID":"48a733ad4e71b35ecfc37b6249761c0b1fd2fcab223b4a02fabb5366e396eea7"} pod="openstack/cinder-255d6-backup-0" containerMessage="Container cinder-backup failed liveness probe, will be restarted" Mar 19 09:56:11.196192 master-0 kubenswrapper[27819]: I0319 09:56:11.196046 27819 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-255d6-backup-0" podUID="fc3c3fd4-1650-4012-9488-cba497b6776e" containerName="cinder-backup" containerID="cri-o://48a733ad4e71b35ecfc37b6249761c0b1fd2fcab223b4a02fabb5366e396eea7" gracePeriod=30 Mar 19 09:56:13.907133 master-0 kubenswrapper[27819]: I0319 09:56:13.907062 27819 generic.go:334] "Generic (PLEG): container finished" podID="d035bd9e-47a7-4d0a-a754-b98bc99dd02b" containerID="a6a69c774ccf2c1f843576e6258f5d70fc7a501a1840be52fcf238c4d496d0a5" exitCode=0 Mar 19 09:56:13.907133 master-0 kubenswrapper[27819]: I0319 09:56:13.907120 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" 
event={"ID":"d035bd9e-47a7-4d0a-a754-b98bc99dd02b","Type":"ContainerDied","Data":"a6a69c774ccf2c1f843576e6258f5d70fc7a501a1840be52fcf238c4d496d0a5"} Mar 19 09:56:14.920316 master-0 kubenswrapper[27819]: I0319 09:56:14.920249 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-scheduler-0" event={"ID":"d035bd9e-47a7-4d0a-a754-b98bc99dd02b","Type":"ContainerStarted","Data":"4fad26aa292e89277d522770f3cafe2e707f99ad54cc31f4c9b18227a787fc62"} Mar 19 09:56:14.925426 master-0 kubenswrapper[27819]: I0319 09:56:14.925382 27819 generic.go:334] "Generic (PLEG): container finished" podID="db5a3772-4afd-478b-85d3-dd454056f3b9" containerID="37f15bed58477898871a1eecca1d2b2d15baef6fd04212bcc9c7c473fe158127" exitCode=0 Mar 19 09:56:14.925661 master-0 kubenswrapper[27819]: I0319 09:56:14.925482 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"db5a3772-4afd-478b-85d3-dd454056f3b9","Type":"ContainerDied","Data":"37f15bed58477898871a1eecca1d2b2d15baef6fd04212bcc9c7c473fe158127"} Mar 19 09:56:15.969372 master-0 kubenswrapper[27819]: I0319 09:56:15.969288 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" event={"ID":"db5a3772-4afd-478b-85d3-dd454056f3b9","Type":"ContainerStarted","Data":"b3cf719ab58ccb6b575f70fafed760106159cf06f9ff4d9e91da19997a22f2e5"} Mar 19 09:56:17.877251 master-0 kubenswrapper[27819]: I0319 09:56:17.876926 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:56:18.012897 master-0 kubenswrapper[27819]: I0319 09:56:18.012849 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:56:18.017705 master-0 kubenswrapper[27819]: I0319 09:56:18.017646 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" 
event={"ID":"fc3c3fd4-1650-4012-9488-cba497b6776e","Type":"ContainerDied","Data":"48a733ad4e71b35ecfc37b6249761c0b1fd2fcab223b4a02fabb5366e396eea7"} Mar 19 09:56:18.017814 master-0 kubenswrapper[27819]: I0319 09:56:18.017659 27819 generic.go:334] "Generic (PLEG): container finished" podID="fc3c3fd4-1650-4012-9488-cba497b6776e" containerID="48a733ad4e71b35ecfc37b6249761c0b1fd2fcab223b4a02fabb5366e396eea7" exitCode=0 Mar 19 09:56:18.057114 master-0 kubenswrapper[27819]: E0319 09:56:18.057057 27819 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc3c3fd4_1650_4012_9488_cba497b6776e.slice/crio-conmon-48a733ad4e71b35ecfc37b6249761c0b1fd2fcab223b4a02fabb5366e396eea7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc3c3fd4_1650_4012_9488_cba497b6776e.slice/crio-48a733ad4e71b35ecfc37b6249761c0b1fd2fcab223b4a02fabb5366e396eea7.scope\": RecentStats: unable to find data in memory cache]" Mar 19 09:56:19.071611 master-0 kubenswrapper[27819]: I0319 09:56:19.061184 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-255d6-backup-0" event={"ID":"fc3c3fd4-1650-4012-9488-cba497b6776e","Type":"ContainerStarted","Data":"122f680f3e2e983274c5f5313e003303583fe18171381e6f2e34e251a9a38e74"} Mar 19 09:56:22.883536 master-0 kubenswrapper[27819]: I0319 09:56:22.883456 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-volume-lvm-iscsi-0" Mar 19 09:56:23.019168 master-0 kubenswrapper[27819]: I0319 09:56:23.019053 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-scheduler-0" Mar 19 09:56:23.064618 master-0 kubenswrapper[27819]: I0319 09:56:23.064500 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-255d6-backup-0" 
Mar 19 09:56:23.078366 master-0 kubenswrapper[27819]: I0319 09:56:23.078325 27819 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-255d6-backup-0" Mar 19 09:58:31.063306 master-0 kubenswrapper[27819]: I0319 09:58:31.063257 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-q4ngh"] Mar 19 09:58:31.077878 master-0 kubenswrapper[27819]: I0319 09:58:31.077827 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-q4ngh"] Mar 19 09:58:31.295289 master-0 kubenswrapper[27819]: I0319 09:58:31.295227 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60be1ce-b91b-497c-935b-2f8d245d6f8f" path="/var/lib/kubelet/pods/a60be1ce-b91b-497c-935b-2f8d245d6f8f/volumes" Mar 19 09:58:33.091664 master-0 kubenswrapper[27819]: I0319 09:58:33.091605 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6a2c-account-create-update-bhtkr"] Mar 19 09:58:33.109420 master-0 kubenswrapper[27819]: I0319 09:58:33.109129 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xnr86"] Mar 19 09:58:33.127150 master-0 kubenswrapper[27819]: I0319 09:58:33.127076 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xnr86"] Mar 19 09:58:33.153010 master-0 kubenswrapper[27819]: I0319 09:58:33.152942 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6a2c-account-create-update-bhtkr"] Mar 19 09:58:33.163048 master-0 kubenswrapper[27819]: I0319 09:58:33.162987 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-080c-account-create-update-9fljq"] Mar 19 09:58:33.182469 master-0 kubenswrapper[27819]: I0319 09:58:33.182388 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-080c-account-create-update-9fljq"] Mar 19 09:58:33.293304 master-0 kubenswrapper[27819]: I0319 09:58:33.293229 27819 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e53946-899c-430a-8758-b8f7a30e3897" path="/var/lib/kubelet/pods/39e53946-899c-430a-8758-b8f7a30e3897/volumes" Mar 19 09:58:33.294041 master-0 kubenswrapper[27819]: I0319 09:58:33.294008 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a" path="/var/lib/kubelet/pods/48ee32e5-ced4-43d7-9ad6-5bf6ae3e6a7a/volumes" Mar 19 09:58:33.294833 master-0 kubenswrapper[27819]: I0319 09:58:33.294800 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce72d5ce-6259-47ba-a860-7b45dafbbf7a" path="/var/lib/kubelet/pods/ce72d5ce-6259-47ba-a860-7b45dafbbf7a/volumes" Mar 19 09:58:37.054691 master-0 kubenswrapper[27819]: I0319 09:58:37.054615 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-7mpd2"] Mar 19 09:58:37.061064 master-0 kubenswrapper[27819]: I0319 09:58:37.060993 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-d9a8-account-create-update-cqgzr"] Mar 19 09:58:37.070368 master-0 kubenswrapper[27819]: I0319 09:58:37.070312 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-7mpd2"] Mar 19 09:58:37.080291 master-0 kubenswrapper[27819]: I0319 09:58:37.080232 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-d9a8-account-create-update-cqgzr"] Mar 19 09:58:37.298372 master-0 kubenswrapper[27819]: I0319 09:58:37.298300 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48418558-addd-4db9-a62d-177832acc8db" path="/var/lib/kubelet/pods/48418558-addd-4db9-a62d-177832acc8db/volumes" Mar 19 09:58:37.299019 master-0 kubenswrapper[27819]: I0319 09:58:37.298987 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5" path="/var/lib/kubelet/pods/6c5649b3-75ac-4ffb-8aa8-d4bd877fdfc5/volumes" Mar 19 09:59:01.076765 master-0 
kubenswrapper[27819]: I0319 09:59:01.076073 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-j2pq2"] Mar 19 09:59:01.095590 master-0 kubenswrapper[27819]: I0319 09:59:01.095503 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-j2pq2"] Mar 19 09:59:01.298715 master-0 kubenswrapper[27819]: I0319 09:59:01.298638 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269819a3-da02-4fe9-b75d-558dad4d418a" path="/var/lib/kubelet/pods/269819a3-da02-4fe9-b75d-558dad4d418a/volumes" Mar 19 09:59:01.359866 master-0 kubenswrapper[27819]: I0319 09:59:01.359805 27819 scope.go:117] "RemoveContainer" containerID="9b4dc71f83dabae1193cc17268f241173af34354bb14adfd68447e5040782063" Mar 19 09:59:01.389260 master-0 kubenswrapper[27819]: I0319 09:59:01.389207 27819 scope.go:117] "RemoveContainer" containerID="295597e41eb1a3bf943e421ab4b58d3e0a34d849839eede8c6fd6242988132fb" Mar 19 09:59:01.419760 master-0 kubenswrapper[27819]: I0319 09:59:01.419701 27819 scope.go:117] "RemoveContainer" containerID="f19dd365c3b67193f902e88c09f03cd802761549990916afe866396744a9a930" Mar 19 09:59:01.454765 master-0 kubenswrapper[27819]: I0319 09:59:01.454718 27819 scope.go:117] "RemoveContainer" containerID="b15635387ccd8991450e5027a429ddcaf5450adcd82415a41b41c675d9acf4cb" Mar 19 09:59:01.484535 master-0 kubenswrapper[27819]: I0319 09:59:01.484478 27819 scope.go:117] "RemoveContainer" containerID="e67e0b336cdbe58fcf98005551a867032fe99fca7066309b489ee6a847e55cc3" Mar 19 09:59:01.508272 master-0 kubenswrapper[27819]: I0319 09:59:01.508186 27819 scope.go:117] "RemoveContainer" containerID="16d7e745ecf80d32283d79a3110151f972abf05e38483efb25335c84976520f2" Mar 19 09:59:01.528574 master-0 kubenswrapper[27819]: I0319 09:59:01.528511 27819 scope.go:117] "RemoveContainer" containerID="7a16d8e44045d02299ec0518ea290b8605d9c2066bf78d9f1ece95319aed7c01" Mar 19 09:59:08.054039 master-0 
kubenswrapper[27819]: I0319 09:59:08.051885 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ce1b-account-create-update-77p7q"] Mar 19 09:59:08.085652 master-0 kubenswrapper[27819]: I0319 09:59:08.085452 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qmknf"] Mar 19 09:59:08.099019 master-0 kubenswrapper[27819]: I0319 09:59:08.098969 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qmknf"] Mar 19 09:59:08.110464 master-0 kubenswrapper[27819]: I0319 09:59:08.110408 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9t78z"] Mar 19 09:59:08.122316 master-0 kubenswrapper[27819]: I0319 09:59:08.122262 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9abb-account-create-update-kjptp"] Mar 19 09:59:08.135472 master-0 kubenswrapper[27819]: I0319 09:59:08.135396 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ce1b-account-create-update-77p7q"] Mar 19 09:59:08.148355 master-0 kubenswrapper[27819]: I0319 09:59:08.148293 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9abb-account-create-update-kjptp"] Mar 19 09:59:08.160013 master-0 kubenswrapper[27819]: I0319 09:59:08.159950 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9t78z"] Mar 19 09:59:09.040491 master-0 kubenswrapper[27819]: I0319 09:59:09.040414 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jpnfs"] Mar 19 09:59:09.056421 master-0 kubenswrapper[27819]: I0319 09:59:09.056370 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jpnfs"] Mar 19 09:59:09.292153 master-0 kubenswrapper[27819]: I0319 09:59:09.292056 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87" 
path="/var/lib/kubelet/pods/1e5a9431-b9cb-4cc1-bfa7-07fcf20fbf87/volumes" Mar 19 09:59:09.293035 master-0 kubenswrapper[27819]: I0319 09:59:09.293018 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3deae5fc-655a-4c5f-be1b-486fddcfc606" path="/var/lib/kubelet/pods/3deae5fc-655a-4c5f-be1b-486fddcfc606/volumes" Mar 19 09:59:09.293920 master-0 kubenswrapper[27819]: I0319 09:59:09.293879 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881d4aba-28a0-45f1-b05f-c003b3b6b2ac" path="/var/lib/kubelet/pods/881d4aba-28a0-45f1-b05f-c003b3b6b2ac/volumes" Mar 19 09:59:09.294922 master-0 kubenswrapper[27819]: I0319 09:59:09.294903 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affc6a41-5d03-4f11-9415-2d17fee716d6" path="/var/lib/kubelet/pods/affc6a41-5d03-4f11-9415-2d17fee716d6/volumes" Mar 19 09:59:09.296283 master-0 kubenswrapper[27819]: I0319 09:59:09.296261 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea171b1-195d-427a-bbdc-80ac54af14bd" path="/var/lib/kubelet/pods/dea171b1-195d-427a-bbdc-80ac54af14bd/volumes" Mar 19 09:59:16.033301 master-0 kubenswrapper[27819]: I0319 09:59:16.033230 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4c4xb"] Mar 19 09:59:16.044513 master-0 kubenswrapper[27819]: I0319 09:59:16.044449 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4c4xb"] Mar 19 09:59:17.294778 master-0 kubenswrapper[27819]: I0319 09:59:17.294710 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206adece-9cbb-4c73-a7fa-b2ba36acbbee" path="/var/lib/kubelet/pods/206adece-9cbb-4c73-a7fa-b2ba36acbbee/volumes" Mar 19 09:59:23.049705 master-0 kubenswrapper[27819]: I0319 09:59:23.049618 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-fx99f"] Mar 19 09:59:23.062463 master-0 kubenswrapper[27819]: I0319 09:59:23.062407 27819 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-569f-account-create-update-6h6km"] Mar 19 09:59:23.073579 master-0 kubenswrapper[27819]: I0319 09:59:23.073501 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-569f-account-create-update-6h6km"] Mar 19 09:59:23.087526 master-0 kubenswrapper[27819]: I0319 09:59:23.087448 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-fx99f"] Mar 19 09:59:23.292950 master-0 kubenswrapper[27819]: I0319 09:59:23.292887 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825e20ea-e29b-4aef-a7ab-3c3c92147e1f" path="/var/lib/kubelet/pods/825e20ea-e29b-4aef-a7ab-3c3c92147e1f/volumes" Mar 19 09:59:23.294214 master-0 kubenswrapper[27819]: I0319 09:59:23.294172 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbc1696-8633-4090-93f5-84b5ea19bc9a" path="/var/lib/kubelet/pods/cdbc1696-8633-4090-93f5-84b5ea19bc9a/volumes" Mar 19 09:59:44.036767 master-0 kubenswrapper[27819]: I0319 09:59:44.036197 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-s6mzj"] Mar 19 09:59:44.050339 master-0 kubenswrapper[27819]: I0319 09:59:44.050264 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-s6mzj"] Mar 19 09:59:45.296290 master-0 kubenswrapper[27819]: I0319 09:59:45.296209 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1abe1e88-82d4-488e-bd25-08cf29f5952e" path="/var/lib/kubelet/pods/1abe1e88-82d4-488e-bd25-08cf29f5952e/volumes" Mar 19 09:59:51.054994 master-0 kubenswrapper[27819]: I0319 09:59:51.054860 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-52q7d"] Mar 19 09:59:51.070809 master-0 kubenswrapper[27819]: I0319 09:59:51.070742 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-52q7d"] Mar 19 09:59:51.295280 master-0 
kubenswrapper[27819]: I0319 09:59:51.295217 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9db4b54-f904-4d5e-95e6-93e2cee01d6b" path="/var/lib/kubelet/pods/b9db4b54-f904-4d5e-95e6-93e2cee01d6b/volumes" Mar 19 09:59:52.044034 master-0 kubenswrapper[27819]: I0319 09:59:52.043976 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4bmsm"] Mar 19 09:59:52.063109 master-0 kubenswrapper[27819]: I0319 09:59:52.063053 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-255d6-db-sync-fvnxz"] Mar 19 09:59:52.076887 master-0 kubenswrapper[27819]: I0319 09:59:52.076823 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4bmsm"] Mar 19 09:59:52.088604 master-0 kubenswrapper[27819]: I0319 09:59:52.088556 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-255d6-db-sync-fvnxz"] Mar 19 09:59:53.296990 master-0 kubenswrapper[27819]: I0319 09:59:53.296931 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ef7ab15-9976-4989-b837-55f0b27ee661" path="/var/lib/kubelet/pods/5ef7ab15-9976-4989-b837-55f0b27ee661/volumes" Mar 19 09:59:53.297618 master-0 kubenswrapper[27819]: I0319 09:59:53.297594 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0" path="/var/lib/kubelet/pods/77ccabdf-4c6c-45ca-aa14-2d893fcbb3b0/volumes" Mar 19 10:00:01.687712 master-0 kubenswrapper[27819]: I0319 10:00:01.687641 27819 scope.go:117] "RemoveContainer" containerID="6be53d27e56de3d8167bd2e882ee9c8fb9e090b57b58a1916585c9f92803a529" Mar 19 10:00:01.726659 master-0 kubenswrapper[27819]: I0319 10:00:01.726578 27819 scope.go:117] "RemoveContainer" containerID="b108245d4e4e64ade9bcd2737ecd345356a67a3af201f5d6c0ab09fe5b888925" Mar 19 10:00:01.749804 master-0 kubenswrapper[27819]: I0319 10:00:01.749646 27819 scope.go:117] "RemoveContainer" 
containerID="7bae05fb3383244813acfdfc734e22bbe682bdf7ca0d5badfc1f2e4831980e18" Mar 19 10:00:01.773984 master-0 kubenswrapper[27819]: I0319 10:00:01.773926 27819 scope.go:117] "RemoveContainer" containerID="d72df5c1a63d9aca4b027a171f2c23a686016bafa78561cbfa5b33001f62f3f3" Mar 19 10:00:01.794529 master-0 kubenswrapper[27819]: I0319 10:00:01.794487 27819 scope.go:117] "RemoveContainer" containerID="d8bdd42c4174286c5fb6e3942a1b4408930a558c5d95acc4f667196df39cbcd2" Mar 19 10:00:01.815189 master-0 kubenswrapper[27819]: I0319 10:00:01.815092 27819 scope.go:117] "RemoveContainer" containerID="8daa0a2f6773947968d257246c818f6a753d54769d10d53dd3d9c27e5eef54a4" Mar 19 10:00:01.842289 master-0 kubenswrapper[27819]: I0319 10:00:01.842227 27819 scope.go:117] "RemoveContainer" containerID="63c1e554ea276bab320ed3645bc7a4e6a2d6214271cd866718b05d01add4f026" Mar 19 10:00:01.870500 master-0 kubenswrapper[27819]: I0319 10:00:01.870457 27819 scope.go:117] "RemoveContainer" containerID="cda346a18e3ae817a45f054731ddfc16d70398e0a32ef37a8200dfe237c6efb2" Mar 19 10:00:01.892284 master-0 kubenswrapper[27819]: I0319 10:00:01.892224 27819 scope.go:117] "RemoveContainer" containerID="16efc548ff718c4742f6ef96f68919148a2e5375bb900f86044285f55d8e809e" Mar 19 10:00:01.914868 master-0 kubenswrapper[27819]: I0319 10:00:01.914822 27819 scope.go:117] "RemoveContainer" containerID="ab59f40d7873631d7e7fe0fbe85ee5f09c8af548517b6ec61865d1d210deda3c" Mar 19 10:00:01.937655 master-0 kubenswrapper[27819]: I0319 10:00:01.937521 27819 scope.go:117] "RemoveContainer" containerID="a29063fdc487022c455c11e3e132c93389620ff41f4f30e98cee22be95288ab1" Mar 19 10:00:01.971759 master-0 kubenswrapper[27819]: I0319 10:00:01.971719 27819 scope.go:117] "RemoveContainer" containerID="4dd2ab6395f5552881bbd2f281e57913e3797d55720f18a136da30f1cd9b2f42" Mar 19 10:00:11.044862 master-0 kubenswrapper[27819]: I0319 10:00:11.044789 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-975s6"] Mar 19 
10:00:11.057683 master-0 kubenswrapper[27819]: I0319 10:00:11.057607 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-975s6"] Mar 19 10:00:11.297726 master-0 kubenswrapper[27819]: I0319 10:00:11.296826 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d034a2-6b7a-41f4-904d-f333f1ca8605" path="/var/lib/kubelet/pods/23d034a2-6b7a-41f4-904d-f333f1ca8605/volumes" Mar 19 10:00:17.038964 master-0 kubenswrapper[27819]: I0319 10:00:17.038897 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-mf842"] Mar 19 10:00:17.062328 master-0 kubenswrapper[27819]: I0319 10:00:17.062269 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-mf842"] Mar 19 10:00:17.296699 master-0 kubenswrapper[27819]: I0319 10:00:17.296466 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83" path="/var/lib/kubelet/pods/3362bb92-8eb7-4bd5-b6cc-f3d6b3f29d83/volumes" Mar 19 10:00:19.052662 master-0 kubenswrapper[27819]: I0319 10:00:19.052292 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-fd91-account-create-update-2vq5k"] Mar 19 10:00:19.062609 master-0 kubenswrapper[27819]: I0319 10:00:19.062519 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-fd91-account-create-update-2vq5k"] Mar 19 10:00:19.293684 master-0 kubenswrapper[27819]: I0319 10:00:19.293609 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b796c7-2fa1-405c-b2b4-5bccee70d82b" path="/var/lib/kubelet/pods/87b796c7-2fa1-405c-b2b4-5bccee70d82b/volumes" Mar 19 10:00:48.043967 master-0 kubenswrapper[27819]: I0319 10:00:48.043831 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lt5cm"] Mar 19 10:00:48.056257 master-0 kubenswrapper[27819]: I0319 10:00:48.056158 27819 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lt5cm"] Mar 19 10:00:49.293793 master-0 kubenswrapper[27819]: I0319 10:00:49.293677 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d91904e-bd5c-4efd-85c9-569efa06f557" path="/var/lib/kubelet/pods/2d91904e-bd5c-4efd-85c9-569efa06f557/volumes" Mar 19 10:00:55.073101 master-0 kubenswrapper[27819]: I0319 10:00:55.072832 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-b706-account-create-update-xlr8d"] Mar 19 10:00:55.089032 master-0 kubenswrapper[27819]: I0319 10:00:55.088950 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c2c5-account-create-update-slm57"] Mar 19 10:00:55.105478 master-0 kubenswrapper[27819]: I0319 10:00:55.105375 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-wp7cf"] Mar 19 10:00:55.120278 master-0 kubenswrapper[27819]: I0319 10:00:55.120188 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ddf5-account-create-update-sl76f"] Mar 19 10:00:55.133963 master-0 kubenswrapper[27819]: I0319 10:00:55.133874 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5fvkc"] Mar 19 10:00:55.166100 master-0 kubenswrapper[27819]: I0319 10:00:55.166006 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-nbsd2"] Mar 19 10:00:55.180340 master-0 kubenswrapper[27819]: I0319 10:00:55.180242 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ddf5-account-create-update-sl76f"] Mar 19 10:00:55.195732 master-0 kubenswrapper[27819]: I0319 10:00:55.195110 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-b706-account-create-update-xlr8d"] Mar 19 10:00:55.207160 master-0 kubenswrapper[27819]: I0319 10:00:55.207011 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-c2c5-account-create-update-slm57"] Mar 19 10:00:55.221157 master-0 kubenswrapper[27819]: I0319 10:00:55.221093 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-wp7cf"] Mar 19 10:00:55.238780 master-0 kubenswrapper[27819]: I0319 10:00:55.238719 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5fvkc"] Mar 19 10:00:55.252719 master-0 kubenswrapper[27819]: I0319 10:00:55.252610 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-nbsd2"] Mar 19 10:00:55.298686 master-0 kubenswrapper[27819]: I0319 10:00:55.298589 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2828d124-ef3e-4f24-89ab-4eca7d22c966" path="/var/lib/kubelet/pods/2828d124-ef3e-4f24-89ab-4eca7d22c966/volumes" Mar 19 10:00:55.299608 master-0 kubenswrapper[27819]: I0319 10:00:55.299568 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc92a9e-be85-46cb-beef-01cd2ded3c3a" path="/var/lib/kubelet/pods/3fc92a9e-be85-46cb-beef-01cd2ded3c3a/volumes" Mar 19 10:00:55.300147 master-0 kubenswrapper[27819]: I0319 10:00:55.300107 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dde7f0b-6f7c-461d-9749-66777abb0610" path="/var/lib/kubelet/pods/4dde7f0b-6f7c-461d-9749-66777abb0610/volumes" Mar 19 10:00:55.301199 master-0 kubenswrapper[27819]: I0319 10:00:55.301151 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f01e2e4-dcb6-4524-bf81-076a2768309d" path="/var/lib/kubelet/pods/6f01e2e4-dcb6-4524-bf81-076a2768309d/volumes" Mar 19 10:00:55.301791 master-0 kubenswrapper[27819]: I0319 10:00:55.301722 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4" path="/var/lib/kubelet/pods/c1a6d30b-4ca2-469f-9ccc-35bb03d09cc4/volumes" Mar 19 10:00:55.302421 master-0 kubenswrapper[27819]: I0319 10:00:55.302336 27819 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e17be3-0a3b-485f-8259-6f2b66f275a6" path="/var/lib/kubelet/pods/c4e17be3-0a3b-485f-8259-6f2b66f275a6/volumes" Mar 19 10:01:00.211787 master-0 kubenswrapper[27819]: I0319 10:01:00.211701 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29565241-nbvhk"] Mar 19 10:01:00.214495 master-0 kubenswrapper[27819]: I0319 10:01:00.214444 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.247795 master-0 kubenswrapper[27819]: I0319 10:01:00.247751 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29565241-nbvhk"] Mar 19 10:01:00.317516 master-0 kubenswrapper[27819]: I0319 10:01:00.317452 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-combined-ca-bundle\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.317757 master-0 kubenswrapper[27819]: I0319 10:01:00.317533 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwhj\" (UniqueName: \"kubernetes.io/projected/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-kube-api-access-9pwhj\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.317944 master-0 kubenswrapper[27819]: I0319 10:01:00.317910 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-fernet-keys\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " 
pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.318108 master-0 kubenswrapper[27819]: I0319 10:01:00.318093 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-config-data\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.421241 master-0 kubenswrapper[27819]: I0319 10:01:00.421159 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-combined-ca-bundle\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.421501 master-0 kubenswrapper[27819]: I0319 10:01:00.421389 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwhj\" (UniqueName: \"kubernetes.io/projected/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-kube-api-access-9pwhj\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.421501 master-0 kubenswrapper[27819]: I0319 10:01:00.421468 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-fernet-keys\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.421623 master-0 kubenswrapper[27819]: I0319 10:01:00.421529 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-config-data\") pod \"keystone-cron-29565241-nbvhk\" (UID: 
\"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.432629 master-0 kubenswrapper[27819]: I0319 10:01:00.426071 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-combined-ca-bundle\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.432629 master-0 kubenswrapper[27819]: I0319 10:01:00.426194 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-config-data\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.432629 master-0 kubenswrapper[27819]: I0319 10:01:00.426390 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-fernet-keys\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.439498 master-0 kubenswrapper[27819]: I0319 10:01:00.439405 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwhj\" (UniqueName: \"kubernetes.io/projected/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-kube-api-access-9pwhj\") pod \"keystone-cron-29565241-nbvhk\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") " pod="openstack/keystone-cron-29565241-nbvhk" Mar 19 10:01:00.547115 master-0 kubenswrapper[27819]: I0319 10:01:00.546997 27819 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29565241-nbvhk"
Mar 19 10:01:01.078279 master-0 kubenswrapper[27819]: I0319 10:01:01.077963 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29565241-nbvhk"]
Mar 19 10:01:01.085297 master-0 kubenswrapper[27819]: W0319 10:01:01.085231 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode66c12a2_9fc0_4fef_97ec_e2515d2d67c9.slice/crio-b1a4cf252b67aea73f66fa9dbc6cdc7ac66adb7dc8b384984345bfa9d88ced61 WatchSource:0}: Error finding container b1a4cf252b67aea73f66fa9dbc6cdc7ac66adb7dc8b384984345bfa9d88ced61: Status 404 returned error can't find the container with id b1a4cf252b67aea73f66fa9dbc6cdc7ac66adb7dc8b384984345bfa9d88ced61
Mar 19 10:01:01.421806 master-0 kubenswrapper[27819]: I0319 10:01:01.421715 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565241-nbvhk" event={"ID":"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9","Type":"ContainerStarted","Data":"8cd3ad6f1e81ad5e32dce99cb07f523ccd99b4d41001749d6c47776891f90804"}
Mar 19 10:01:01.421806 master-0 kubenswrapper[27819]: I0319 10:01:01.421788 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565241-nbvhk" event={"ID":"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9","Type":"ContainerStarted","Data":"b1a4cf252b67aea73f66fa9dbc6cdc7ac66adb7dc8b384984345bfa9d88ced61"}
Mar 19 10:01:02.219794 master-0 kubenswrapper[27819]: I0319 10:01:02.219727 27819 scope.go:117] "RemoveContainer" containerID="89c3c6a975cbc32e5a46634dd1b98c41e029adc09b187e8e674d3702acf5166d"
Mar 19 10:01:02.242867 master-0 kubenswrapper[27819]: I0319 10:01:02.242814 27819 scope.go:117] "RemoveContainer" containerID="505d3486d4d800c7f96de2e2e31705acf6b2d6075e08ede90082502df9785fe3"
Mar 19 10:01:02.263487 master-0 kubenswrapper[27819]: I0319 10:01:02.263435 27819 scope.go:117] "RemoveContainer" containerID="f19c21ac1eff2e7b5b276132cc723f0087f538ec157ad23acf28d1e571aa62bd"
Mar 19 10:01:02.285075 master-0 kubenswrapper[27819]: I0319 10:01:02.285002 27819 scope.go:117] "RemoveContainer" containerID="0d072340e9c5d8c8a8ada8340eaf2e2cc57090a8105c9a970ee4899ea981242b"
Mar 19 10:01:02.307011 master-0 kubenswrapper[27819]: I0319 10:01:02.306952 27819 scope.go:117] "RemoveContainer" containerID="76ee1f2703359f9ccf0890f11aaf8273188d893651abce2bf83630ba2a84a232"
Mar 19 10:01:02.329768 master-0 kubenswrapper[27819]: I0319 10:01:02.329697 27819 scope.go:117] "RemoveContainer" containerID="013a80d69723fbd7b8f9139ea3fbff2996507329f79450900b08b962f86aadb8"
Mar 19 10:01:02.379192 master-0 kubenswrapper[27819]: I0319 10:01:02.379144 27819 scope.go:117] "RemoveContainer" containerID="b5c48c7fbdde94ca974b5f96e8471c98a7ccfd64b116eff8a658403787596e23"
Mar 19 10:01:02.401087 master-0 kubenswrapper[27819]: I0319 10:01:02.401025 27819 scope.go:117] "RemoveContainer" containerID="88335740d680312e38e2c34d8364c0a2784671f1b0560db4a354b24ee3313ab1"
Mar 19 10:01:02.443936 master-0 kubenswrapper[27819]: I0319 10:01:02.443876 27819 scope.go:117] "RemoveContainer" containerID="c72f57e99001c8e3c467bcb91a14f18bbb6c18a2f184bc10d2f9242d2426f269"
Mar 19 10:01:02.467678 master-0 kubenswrapper[27819]: I0319 10:01:02.467635 27819 scope.go:117] "RemoveContainer" containerID="5fd2f916ebbcf63bd81d75b952ae169217a1bb4e6428f0c8481aeb088c2f10ba"
Mar 19 10:01:02.486924 master-0 kubenswrapper[27819]: I0319 10:01:02.486880 27819 scope.go:117] "RemoveContainer" containerID="2d9370bd03b3b66b345c07fc6d14826e7547d65fcec0b74c8b9cfb0f6cbccb04"
Mar 19 10:01:05.493912 master-0 kubenswrapper[27819]: I0319 10:01:05.493769 27819 generic.go:334] "Generic (PLEG): container finished" podID="e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" containerID="8cd3ad6f1e81ad5e32dce99cb07f523ccd99b4d41001749d6c47776891f90804" exitCode=0
Mar 19 10:01:05.493912 master-0 kubenswrapper[27819]: I0319 10:01:05.493826 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565241-nbvhk" event={"ID":"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9","Type":"ContainerDied","Data":"8cd3ad6f1e81ad5e32dce99cb07f523ccd99b4d41001749d6c47776891f90804"}
Mar 19 10:01:06.914953 master-0 kubenswrapper[27819]: I0319 10:01:06.914885 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29565241-nbvhk"
Mar 19 10:01:07.004922 master-0 kubenswrapper[27819]: I0319 10:01:07.004843 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-config-data\") pod \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") "
Mar 19 10:01:07.004922 master-0 kubenswrapper[27819]: I0319 10:01:07.004922 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-combined-ca-bundle\") pod \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") "
Mar 19 10:01:07.005201 master-0 kubenswrapper[27819]: I0319 10:01:07.005017 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-fernet-keys\") pod \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") "
Mar 19 10:01:07.005269 master-0 kubenswrapper[27819]: I0319 10:01:07.005240 27819 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pwhj\" (UniqueName: \"kubernetes.io/projected/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-kube-api-access-9pwhj\") pod \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\" (UID: \"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9\") "
Mar 19 10:01:07.009157 master-0 kubenswrapper[27819]: I0319 10:01:07.009106 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" (UID: "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 10:01:07.012992 master-0 kubenswrapper[27819]: I0319 10:01:07.012919 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-kube-api-access-9pwhj" (OuterVolumeSpecName: "kube-api-access-9pwhj") pod "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" (UID: "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9"). InnerVolumeSpecName "kube-api-access-9pwhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 19 10:01:07.039865 master-0 kubenswrapper[27819]: I0319 10:01:07.039799 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" (UID: "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 10:01:07.071264 master-0 kubenswrapper[27819]: I0319 10:01:07.071198 27819 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-config-data" (OuterVolumeSpecName: "config-data") pod "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" (UID: "e66c12a2-9fc0-4fef-97ec-e2515d2d67c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 19 10:01:07.107891 master-0 kubenswrapper[27819]: I0319 10:01:07.107819 27819 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pwhj\" (UniqueName: \"kubernetes.io/projected/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-kube-api-access-9pwhj\") on node \"master-0\" DevicePath \"\""
Mar 19 10:01:07.107891 master-0 kubenswrapper[27819]: I0319 10:01:07.107872 27819 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-config-data\") on node \"master-0\" DevicePath \"\""
Mar 19 10:01:07.107891 master-0 kubenswrapper[27819]: I0319 10:01:07.107887 27819 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 19 10:01:07.107891 master-0 kubenswrapper[27819]: I0319 10:01:07.107901 27819 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e66c12a2-9fc0-4fef-97ec-e2515d2d67c9-fernet-keys\") on node \"master-0\" DevicePath \"\""
Mar 19 10:01:07.516053 master-0 kubenswrapper[27819]: I0319 10:01:07.515982 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29565241-nbvhk" event={"ID":"e66c12a2-9fc0-4fef-97ec-e2515d2d67c9","Type":"ContainerDied","Data":"b1a4cf252b67aea73f66fa9dbc6cdc7ac66adb7dc8b384984345bfa9d88ced61"}
Mar 19 10:01:07.516053 master-0 kubenswrapper[27819]: I0319 10:01:07.516030 27819 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29565241-nbvhk"
Mar 19 10:01:07.517099 master-0 kubenswrapper[27819]: I0319 10:01:07.516051 27819 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a4cf252b67aea73f66fa9dbc6cdc7ac66adb7dc8b384984345bfa9d88ced61"
Mar 19 10:01:32.053874 master-0 kubenswrapper[27819]: I0319 10:01:32.053802 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cflgz"]
Mar 19 10:01:32.067337 master-0 kubenswrapper[27819]: I0319 10:01:32.067274 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-cflgz"]
Mar 19 10:01:33.301296 master-0 kubenswrapper[27819]: I0319 10:01:33.301227 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d45107-4dfc-40e2-b122-216848830469" path="/var/lib/kubelet/pods/30d45107-4dfc-40e2-b122-216848830469/volumes"
Mar 19 10:02:02.719219 master-0 kubenswrapper[27819]: I0319 10:02:02.719128 27819 scope.go:117] "RemoveContainer" containerID="76dc7f6b49e73e06dd7fbbd347b395f75e95df5f0d47ffe39d7fda64ae9d7e23"
Mar 19 10:02:03.061340 master-0 kubenswrapper[27819]: I0319 10:02:03.061195 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fg4fw"]
Mar 19 10:02:03.084234 master-0 kubenswrapper[27819]: I0319 10:02:03.084151 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5k89"]
Mar 19 10:02:03.093695 master-0 kubenswrapper[27819]: I0319 10:02:03.093648 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fg4fw"]
Mar 19 10:02:03.103169 master-0 kubenswrapper[27819]: I0319 10:02:03.103092 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-v5k89"]
Mar 19 10:02:03.318101 master-0 kubenswrapper[27819]: I0319 10:02:03.317948 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63105ef4-b1a4-4f96-a846-e5a5f751b7b9" path="/var/lib/kubelet/pods/63105ef4-b1a4-4f96-a846-e5a5f751b7b9/volumes"
Mar 19 10:02:03.321437 master-0 kubenswrapper[27819]: I0319 10:02:03.318794 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b8a71d-51a3-434e-92dd-4e7e341898f5" path="/var/lib/kubelet/pods/87b8a71d-51a3-434e-92dd-4e7e341898f5/volumes"
Mar 19 10:02:39.057683 master-0 kubenswrapper[27819]: I0319 10:02:39.057497 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-fqjbw"]
Mar 19 10:02:39.069172 master-0 kubenswrapper[27819]: I0319 10:02:39.069121 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-fqjbw"]
Mar 19 10:02:39.301129 master-0 kubenswrapper[27819]: I0319 10:02:39.300333 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a640c3-526f-4686-be45-1e77d19d22e5" path="/var/lib/kubelet/pods/c5a640c3-526f-4686-be45-1e77d19d22e5/volumes"
Mar 19 10:02:41.036905 master-0 kubenswrapper[27819]: I0319 10:02:41.036848 27819 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-llz6n"]
Mar 19 10:02:41.050672 master-0 kubenswrapper[27819]: I0319 10:02:41.050602 27819 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-llz6n"]
Mar 19 10:02:41.338763 master-0 kubenswrapper[27819]: I0319 10:02:41.338638 27819 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da9159b-7125-4a77-ae29-410332c3be7c" path="/var/lib/kubelet/pods/2da9159b-7125-4a77-ae29-410332c3be7c/volumes"
Mar 19 10:03:02.814939 master-0 kubenswrapper[27819]: I0319 10:03:02.814873 27819 scope.go:117] "RemoveContainer" containerID="115d3a998584e2b265ad430c72cec96f2f7b79b5f2b8a01f091215a6bf166ca6"
Mar 19 10:03:02.839043 master-0 kubenswrapper[27819]: I0319 10:03:02.837781 27819 scope.go:117] "RemoveContainer" containerID="d8cbefbe12be873f82fe6252c1dd4dcb6055dd79cd52016c6c1b225b747b9ad2"
Mar 19 10:03:02.859961 master-0 kubenswrapper[27819]: I0319 10:03:02.859911 27819 scope.go:117] "RemoveContainer" containerID="66b35d3cda789cf6068630ed695438ae03f27b1d4d4c45f188ba0c93b84638a9"
Mar 19 10:03:02.879180 master-0 kubenswrapper[27819]: I0319 10:03:02.879142 27819 scope.go:117] "RemoveContainer" containerID="b09ec752e758d6653ed9b6cadec20d81f7389748bcb277605dd2288b651a6ca9"
Mar 19 10:16:32.708935 master-0 kubenswrapper[27819]: I0319 10:16:32.708871 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nzsnw/must-gather-8lkjf"]
Mar 19 10:16:32.720249 master-0 kubenswrapper[27819]: E0319 10:16:32.719675 27819 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" containerName="keystone-cron"
Mar 19 10:16:32.721503 master-0 kubenswrapper[27819]: I0319 10:16:32.721333 27819 state_mem.go:107] "Deleted CPUSet assignment" podUID="e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" containerName="keystone-cron"
Mar 19 10:16:32.722279 master-0 kubenswrapper[27819]: I0319 10:16:32.722238 27819 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66c12a2-9fc0-4fef-97ec-e2515d2d67c9" containerName="keystone-cron"
Mar 19 10:16:32.727658 master-0 kubenswrapper[27819]: I0319 10:16:32.727526 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nzsnw/must-gather-22j88"]
Mar 19 10:16:32.727886 master-0 kubenswrapper[27819]: I0319 10:16:32.727664 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:32.735607 master-0 kubenswrapper[27819]: I0319 10:16:32.733251 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nzsnw"/"openshift-service-ca.crt"
Mar 19 10:16:32.736019 master-0 kubenswrapper[27819]: I0319 10:16:32.735943 27819 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nzsnw"/"kube-root-ca.crt"
Mar 19 10:16:32.741167 master-0 kubenswrapper[27819]: I0319 10:16:32.741114 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:32.760958 master-0 kubenswrapper[27819]: I0319 10:16:32.756410 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nzsnw/must-gather-8lkjf"]
Mar 19 10:16:32.804748 master-0 kubenswrapper[27819]: I0319 10:16:32.804607 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nzsnw/must-gather-22j88"]
Mar 19 10:16:32.852752 master-0 kubenswrapper[27819]: I0319 10:16:32.852672 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669wm\" (UniqueName: \"kubernetes.io/projected/586e3604-38c8-42e8-b4ab-a98e2cad1175-kube-api-access-669wm\") pod \"must-gather-8lkjf\" (UID: \"586e3604-38c8-42e8-b4ab-a98e2cad1175\") " pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:32.853205 master-0 kubenswrapper[27819]: I0319 10:16:32.853101 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjfz\" (UniqueName: \"kubernetes.io/projected/dd54291f-38bd-4e3e-993f-e7ecfa5bb166-kube-api-access-2rjfz\") pod \"must-gather-22j88\" (UID: \"dd54291f-38bd-4e3e-993f-e7ecfa5bb166\") " pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:32.853263 master-0 kubenswrapper[27819]: I0319 10:16:32.853212 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586e3604-38c8-42e8-b4ab-a98e2cad1175-must-gather-output\") pod \"must-gather-8lkjf\" (UID: \"586e3604-38c8-42e8-b4ab-a98e2cad1175\") " pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:32.853357 master-0 kubenswrapper[27819]: I0319 10:16:32.853326 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd54291f-38bd-4e3e-993f-e7ecfa5bb166-must-gather-output\") pod \"must-gather-22j88\" (UID: \"dd54291f-38bd-4e3e-993f-e7ecfa5bb166\") " pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:32.959725 master-0 kubenswrapper[27819]: I0319 10:16:32.959562 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rjfz\" (UniqueName: \"kubernetes.io/projected/dd54291f-38bd-4e3e-993f-e7ecfa5bb166-kube-api-access-2rjfz\") pod \"must-gather-22j88\" (UID: \"dd54291f-38bd-4e3e-993f-e7ecfa5bb166\") " pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:32.959725 master-0 kubenswrapper[27819]: I0319 10:16:32.959721 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586e3604-38c8-42e8-b4ab-a98e2cad1175-must-gather-output\") pod \"must-gather-8lkjf\" (UID: \"586e3604-38c8-42e8-b4ab-a98e2cad1175\") " pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:32.960020 master-0 kubenswrapper[27819]: I0319 10:16:32.959831 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd54291f-38bd-4e3e-993f-e7ecfa5bb166-must-gather-output\") pod \"must-gather-22j88\" (UID: \"dd54291f-38bd-4e3e-993f-e7ecfa5bb166\") " pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:32.960020 master-0 kubenswrapper[27819]: I0319 10:16:32.959972 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-669wm\" (UniqueName: \"kubernetes.io/projected/586e3604-38c8-42e8-b4ab-a98e2cad1175-kube-api-access-669wm\") pod \"must-gather-8lkjf\" (UID: \"586e3604-38c8-42e8-b4ab-a98e2cad1175\") " pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:32.960365 master-0 kubenswrapper[27819]: I0319 10:16:32.960301 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/586e3604-38c8-42e8-b4ab-a98e2cad1175-must-gather-output\") pod \"must-gather-8lkjf\" (UID: \"586e3604-38c8-42e8-b4ab-a98e2cad1175\") " pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:32.960365 master-0 kubenswrapper[27819]: I0319 10:16:32.960342 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/dd54291f-38bd-4e3e-993f-e7ecfa5bb166-must-gather-output\") pod \"must-gather-22j88\" (UID: \"dd54291f-38bd-4e3e-993f-e7ecfa5bb166\") " pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:33.156564 master-0 kubenswrapper[27819]: I0319 10:16:33.156284 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjfz\" (UniqueName: \"kubernetes.io/projected/dd54291f-38bd-4e3e-993f-e7ecfa5bb166-kube-api-access-2rjfz\") pod \"must-gather-22j88\" (UID: \"dd54291f-38bd-4e3e-993f-e7ecfa5bb166\") " pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:33.168568 master-0 kubenswrapper[27819]: I0319 10:16:33.167322 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-669wm\" (UniqueName: \"kubernetes.io/projected/586e3604-38c8-42e8-b4ab-a98e2cad1175-kube-api-access-669wm\") pod \"must-gather-8lkjf\" (UID: \"586e3604-38c8-42e8-b4ab-a98e2cad1175\") " pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:33.370372 master-0 kubenswrapper[27819]: I0319 10:16:33.370322 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nzsnw/must-gather-8lkjf"
Mar 19 10:16:33.397474 master-0 kubenswrapper[27819]: I0319 10:16:33.397402 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nzsnw/must-gather-22j88"
Mar 19 10:16:34.066826 master-0 kubenswrapper[27819]: I0319 10:16:34.065414 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nzsnw/must-gather-8lkjf"]
Mar 19 10:16:34.075448 master-0 kubenswrapper[27819]: W0319 10:16:34.074923 27819 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod586e3604_38c8_42e8_b4ab_a98e2cad1175.slice/crio-0c8a7f5ef9eabc6a1db7369a0abf6fb7823b06aa5b1031d5a5398faff377e1a1 WatchSource:0}: Error finding container 0c8a7f5ef9eabc6a1db7369a0abf6fb7823b06aa5b1031d5a5398faff377e1a1: Status 404 returned error can't find the container with id 0c8a7f5ef9eabc6a1db7369a0abf6fb7823b06aa5b1031d5a5398faff377e1a1
Mar 19 10:16:34.077778 master-0 kubenswrapper[27819]: I0319 10:16:34.077742 27819 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 19 10:16:34.079180 master-0 kubenswrapper[27819]: I0319 10:16:34.078992 27819 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nzsnw/must-gather-22j88"]
Mar 19 10:16:34.668076 master-0 kubenswrapper[27819]: I0319 10:16:34.666605 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nzsnw/must-gather-8lkjf" event={"ID":"586e3604-38c8-42e8-b4ab-a98e2cad1175","Type":"ContainerStarted","Data":"0c8a7f5ef9eabc6a1db7369a0abf6fb7823b06aa5b1031d5a5398faff377e1a1"}
Mar 19 10:16:34.668076 master-0 kubenswrapper[27819]: I0319 10:16:34.667841 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nzsnw/must-gather-22j88" event={"ID":"dd54291f-38bd-4e3e-993f-e7ecfa5bb166","Type":"ContainerStarted","Data":"d16f0c08b51339393bea3c560ffc75f8008e9f971fc5de01e182a690d67888f2"}
Mar 19 10:16:41.767994 master-0 kubenswrapper[27819]: I0319 10:16:41.767861 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nzsnw/must-gather-22j88" event={"ID":"dd54291f-38bd-4e3e-993f-e7ecfa5bb166","Type":"ContainerStarted","Data":"69adabeb5beb797b29f257770adf5d65e88d9d7f1c09920ba23f6c4d9d127b66"}
Mar 19 10:16:41.767994 master-0 kubenswrapper[27819]: I0319 10:16:41.767924 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nzsnw/must-gather-22j88" event={"ID":"dd54291f-38bd-4e3e-993f-e7ecfa5bb166","Type":"ContainerStarted","Data":"ca7aabcb0ca2d48cf5598f0bcff68ae4d254d7f6f9eb736dd8cea9b01622fc00"}
Mar 19 10:16:41.807746 master-0 kubenswrapper[27819]: I0319 10:16:41.807666 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nzsnw/must-gather-22j88" podStartSLOduration=2.93291232 podStartE2EDuration="9.80762707s" podCreationTimestamp="2026-03-19 10:16:32 +0000 UTC" firstStartedPulling="2026-03-19 10:16:34.08718976 +0000 UTC m=+2579.008767462" lastFinishedPulling="2026-03-19 10:16:40.96190452 +0000 UTC m=+2585.883482212" observedRunningTime="2026-03-19 10:16:41.805865542 +0000 UTC m=+2586.727443244" watchObservedRunningTime="2026-03-19 10:16:41.80762707 +0000 UTC m=+2586.729204772"
Mar 19 10:16:42.779481 master-0 kubenswrapper[27819]: I0319 10:16:42.779411 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nzsnw/must-gather-8lkjf" event={"ID":"586e3604-38c8-42e8-b4ab-a98e2cad1175","Type":"ContainerStarted","Data":"fda51115ea12445a424253c7b4db4af9acc773bed07040635bc142fd614bd4a8"}
Mar 19 10:16:43.792380 master-0 kubenswrapper[27819]: I0319 10:16:43.792317 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nzsnw/must-gather-8lkjf" event={"ID":"586e3604-38c8-42e8-b4ab-a98e2cad1175","Type":"ContainerStarted","Data":"ce0dfdfe1fffc563ad7eb5631eef5c0e98551d1062fa0c5d0898946013377683"}
Mar 19 10:16:44.891197 master-0 kubenswrapper[27819]: I0319 10:16:44.891106 27819 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nzsnw/must-gather-8lkjf" podStartSLOduration=6.16472691 podStartE2EDuration="12.891085962s" podCreationTimestamp="2026-03-19 10:16:32 +0000 UTC" firstStartedPulling="2026-03-19 10:16:34.077646692 +0000 UTC m=+2578.999224384" lastFinishedPulling="2026-03-19 10:16:40.804005734 +0000 UTC m=+2585.725583436" observedRunningTime="2026-03-19 10:16:44.604697867 +0000 UTC m=+2589.526275559" watchObservedRunningTime="2026-03-19 10:16:44.891085962 +0000 UTC m=+2589.812663654"
Mar 19 10:16:46.889333 master-0 kubenswrapper[27819]: I0319 10:16:46.889271 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-dgqfl_ded5da9a-1447-46df-a8ff-ffd469562599/cluster-version-operator/0.log"
Mar 19 10:16:47.165665 master-0 kubenswrapper[27819]: I0319 10:16:47.164662 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-dgqfl_ded5da9a-1447-46df-a8ff-ffd469562599/cluster-version-operator/1.log"
Mar 19 10:16:49.657410 master-0 kubenswrapper[27819]: I0319 10:16:49.657329 27819 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nzsnw/master-0-debug-9pjbl"]
Mar 19 10:16:49.659076 master-0 kubenswrapper[27819]: I0319 10:16:49.659041 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:49.798100 master-0 kubenswrapper[27819]: I0319 10:16:49.798038 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzrjw\" (UniqueName: \"kubernetes.io/projected/4d1af453-0dd6-4220-9ab7-f118deee876f-kube-api-access-gzrjw\") pod \"master-0-debug-9pjbl\" (UID: \"4d1af453-0dd6-4220-9ab7-f118deee876f\") " pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:49.798338 master-0 kubenswrapper[27819]: I0319 10:16:49.798196 27819 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d1af453-0dd6-4220-9ab7-f118deee876f-host\") pod \"master-0-debug-9pjbl\" (UID: \"4d1af453-0dd6-4220-9ab7-f118deee876f\") " pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:49.901512 master-0 kubenswrapper[27819]: I0319 10:16:49.900860 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzrjw\" (UniqueName: \"kubernetes.io/projected/4d1af453-0dd6-4220-9ab7-f118deee876f-kube-api-access-gzrjw\") pod \"master-0-debug-9pjbl\" (UID: \"4d1af453-0dd6-4220-9ab7-f118deee876f\") " pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:49.901512 master-0 kubenswrapper[27819]: I0319 10:16:49.901179 27819 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d1af453-0dd6-4220-9ab7-f118deee876f-host\") pod \"master-0-debug-9pjbl\" (UID: \"4d1af453-0dd6-4220-9ab7-f118deee876f\") " pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:49.901512 master-0 kubenswrapper[27819]: I0319 10:16:49.901459 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d1af453-0dd6-4220-9ab7-f118deee876f-host\") pod \"master-0-debug-9pjbl\" (UID: \"4d1af453-0dd6-4220-9ab7-f118deee876f\") " pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:49.919076 master-0 kubenswrapper[27819]: I0319 10:16:49.918943 27819 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzrjw\" (UniqueName: \"kubernetes.io/projected/4d1af453-0dd6-4220-9ab7-f118deee876f-kube-api-access-gzrjw\") pod \"master-0-debug-9pjbl\" (UID: \"4d1af453-0dd6-4220-9ab7-f118deee876f\") " pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:49.978475 master-0 kubenswrapper[27819]: I0319 10:16:49.978399 27819 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl"
Mar 19 10:16:50.867618 master-0 kubenswrapper[27819]: I0319 10:16:50.867519 27819 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nzsnw/master-0-debug-9pjbl" event={"ID":"4d1af453-0dd6-4220-9ab7-f118deee876f","Type":"ContainerStarted","Data":"bde35db7909666174c4bebb2056ac05088fd61358d4db79541fb11c79f289fd9"}
Mar 19 10:16:52.850306 master-0 kubenswrapper[27819]: I0319 10:16:52.848864 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-api-0_0ba79b34-019d-48a8-92db-d72841fe8936/cinder-255d6-api-log/0.log"
Mar 19 10:16:52.908067 master-0 kubenswrapper[27819]: I0319 10:16:52.907957 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-api-0_0ba79b34-019d-48a8-92db-d72841fe8936/cinder-api/0.log"
Mar 19 10:16:53.005794 master-0 kubenswrapper[27819]: I0319 10:16:53.003769 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-backup-0_fc3c3fd4-1650-4012-9488-cba497b6776e/cinder-backup/0.log"
Mar 19 10:16:53.045669 master-0 kubenswrapper[27819]: I0319 10:16:53.044840 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-backup-0_fc3c3fd4-1650-4012-9488-cba497b6776e/cinder-backup/1.log"
Mar 19 10:16:53.097268 master-0 kubenswrapper[27819]: I0319 10:16:53.097220 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-backup-0_fc3c3fd4-1650-4012-9488-cba497b6776e/probe/0.log"
Mar 19 10:16:53.172465 master-0 kubenswrapper[27819]: I0319 10:16:53.172329 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-scheduler-0_d035bd9e-47a7-4d0a-a754-b98bc99dd02b/cinder-scheduler/0.log"
Mar 19 10:16:53.237274 master-0 kubenswrapper[27819]: I0319 10:16:53.237216 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-scheduler-0_d035bd9e-47a7-4d0a-a754-b98bc99dd02b/cinder-scheduler/1.log"
Mar 19 10:16:53.278135 master-0 kubenswrapper[27819]: I0319 10:16:53.272805 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-scheduler-0_d035bd9e-47a7-4d0a-a754-b98bc99dd02b/probe/0.log"
Mar 19 10:16:53.432558 master-0 kubenswrapper[27819]: I0319 10:16:53.432409 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-volume-lvm-iscsi-0_db5a3772-4afd-478b-85d3-dd454056f3b9/cinder-volume/0.log"
Mar 19 10:16:53.463599 master-0 kubenswrapper[27819]: I0319 10:16:53.462994 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-volume-lvm-iscsi-0_db5a3772-4afd-478b-85d3-dd454056f3b9/cinder-volume/1.log"
Mar 19 10:16:53.495902 master-0 kubenswrapper[27819]: I0319 10:16:53.495837 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-255d6-volume-lvm-iscsi-0_db5a3772-4afd-478b-85d3-dd454056f3b9/probe/0.log"
Mar 19 10:16:53.516831 master-0 kubenswrapper[27819]: I0319 10:16:53.516753 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb5c99df-95vwz_438f502b-dac1-4794-af36-c1779196e21c/dnsmasq-dns/0.log"
Mar 19 10:16:53.524463 master-0 kubenswrapper[27819]: I0319 10:16:53.524334 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-bb5c99df-95vwz_438f502b-dac1-4794-af36-c1779196e21c/init/0.log"
Mar 19 10:16:53.616556 master-0 kubenswrapper[27819]: I0319 10:16:53.616485 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-ae80b-default-external-api-0_cce27e6f-3c8a-4763-9862-87419f46e912/glance-log/0.log"
Mar 19 10:16:53.646071 master-0 kubenswrapper[27819]: I0319 10:16:53.645984 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-ae80b-default-external-api-0_cce27e6f-3c8a-4763-9862-87419f46e912/glance-httpd/0.log"
Mar 19 10:16:53.779172 master-0 kubenswrapper[27819]: I0319 10:16:53.778321 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-ae80b-default-internal-api-0_82c500ca-9291-43ed-9fa7-91debd8b6289/glance-log/0.log"
Mar 19 10:16:54.162365 master-0 kubenswrapper[27819]: I0319 10:16:54.162306 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-ae80b-default-internal-api-0_82c500ca-9291-43ed-9fa7-91debd8b6289/glance-httpd/0.log"
Mar 19 10:16:55.848609 master-0 kubenswrapper[27819]: I0319 10:16:55.833869 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7c76f89d9c-mf42h_100aa4ed-2375-45c8-b22b-6b981a05d693/ironic-api-log/0.log"
Mar 19 10:16:55.915568 master-0 kubenswrapper[27819]: I0319 10:16:55.912938 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7c76f89d9c-mf42h_100aa4ed-2375-45c8-b22b-6b981a05d693/ironic-api/0.log"
Mar 19 10:16:55.988617 master-0 kubenswrapper[27819]: I0319 10:16:55.988521 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-7c76f89d9c-mf42h_100aa4ed-2375-45c8-b22b-6b981a05d693/init/0.log"
Mar 19 10:16:56.071092 master-0 kubenswrapper[27819]: I0319 10:16:56.071037 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_ca78928f-b0d4-4090-acba-66e98b7d312d/ironic-conductor/0.log"
Mar 19 10:16:56.083110 master-0 kubenswrapper[27819]: I0319 10:16:56.083064 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_ca78928f-b0d4-4090-acba-66e98b7d312d/httpboot/0.log"
Mar 19 10:16:56.148120 master-0 kubenswrapper[27819]: I0319 10:16:56.148008 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_ca78928f-b0d4-4090-acba-66e98b7d312d/dnsmasq/0.log"
Mar 19 10:16:56.173569 master-0 kubenswrapper[27819]: I0319 10:16:56.173507 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_ca78928f-b0d4-4090-acba-66e98b7d312d/init/0.log"
Mar 19 10:16:56.202069 master-0 kubenswrapper[27819]: I0319 10:16:56.202005 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_ca78928f-b0d4-4090-acba-66e98b7d312d/ironic-python-agent-init/0.log"
Mar 19 10:16:57.189585 master-0 kubenswrapper[27819]: I0319 10:16:57.189434 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_ca78928f-b0d4-4090-acba-66e98b7d312d/pxe-init/0.log"
Mar 19 10:16:57.363440 master-0 kubenswrapper[27819]: I0319 10:16:57.363385 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_d554c9a9-fcc8-489a-80b4-3d4fe58da11e/ironic-inspector-httpd/0.log"
Mar 19 10:16:57.433585 master-0 kubenswrapper[27819]: I0319 10:16:57.429359 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log"
Mar 19 10:16:57.449621 master-0 kubenswrapper[27819]: I0319 10:16:57.446521 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_d554c9a9-fcc8-489a-80b4-3d4fe58da11e/ironic-inspector/0.log"
Mar 19 10:16:57.494840 master-0 kubenswrapper[27819]: I0319 10:16:57.494616 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_d554c9a9-fcc8-489a-80b4-3d4fe58da11e/inspector-httpboot/0.log"
Mar 19 10:16:57.512373 master-0 kubenswrapper[27819]: I0319 10:16:57.508208 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_d554c9a9-fcc8-489a-80b4-3d4fe58da11e/ramdisk-logs/0.log"
Mar 19 10:16:57.532973 master-0 kubenswrapper[27819]: I0319 10:16:57.531693 27819 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_d554c9a9-fcc8-489a-80b4-3d4fe58da11e/inspector-dnsmasq/0.log"